Test Report: Docker_Linux_containerd_arm64 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389
Failed tests (1/327)

Order  Failed test                Duration (s)
29     TestAddons/serial/Volcano  199.76

TestAddons/serial/Volcano (199.76s)
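
For reference, a minimal sketch of a Volcano Job equivalent to the testdata/vcjob.yaml the test applies, reconstructed from the pod description further down in this log. The namespace, queue, task name, image, command, and the 1-CPU request/limit come from the describe output; schedulerName and minAvailable are assumptions, and the upstream manifest may differ:

# Hypothetical reconstruction of testdata/vcjob.yaml; values marked
# "assumed" are not visible anywhere in the failure log.
kubectl --context addons-376302 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano   # assumed: Volcano jobs are scheduled by the volcano scheduler
  queue: test              # from the volcano.sh/queue-name label
  minAvailable: 1          # assumed
  tasks:
  - replicas: 1
    name: nginx            # from the volcano.sh/task-spec label
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: nginx
          image: nginx:latest
          command: ["sleep", "10m"]
          resources:
            requests:
              cpu: "1"     # the request the scheduler cannot satisfy below
            limits:
              cpu: "1"
EOF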

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 50.884822ms
addons_test.go:851: volcano-controller stabilized in 51.340401ms
addons_test.go:835: volcano-scheduler stabilized in 51.551502ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-c8k87" [be14af5a-fb69-46c5-8517-bbf9e05a120e] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004457987s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-c5bgp" [e617ad33-0ec5-4966-af21-d49ef4ac8f40] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003928267s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-bm7lv" [e94847f6-586e-4562-b434-b217b69d61bf] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003380374s
addons_test.go:870: (dbg) Run:  kubectl --context addons-376302 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-376302 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-376302 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c97720c9-fdc7-4270-b650-23398f92e360] Pending
helpers_test.go:344: "test-job-nginx-0" [c97720c9-fdc7-4270-b650-23398f92e360] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-376302 -n addons-376302
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-27 00:30:56.243299106 +0000 UTC m=+429.686123473
addons_test.go:902: (dbg) Run:  kubectl --context addons-376302 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-376302 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-584a85e4-8e56-4885-8569-103e2d00b5e8
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9v2vs (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-9v2vs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-376302 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-376302 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
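
The FailedScheduling event above ("0/1 nodes are unavailable: 1 Insufficient cpu.") points at CPU pressure rather than a Volcano component failure: the job's 1-CPU request does not fit on the single 2-CPU node alongside the control-plane and addon pods. A sketch of how to confirm the accounting against the same profile, assuming the addons-376302 cluster is still running:

# Node's allocatable CPU vs. CPU already requested by scheduled pods.
kubectl --context addons-376302 describe node addons-376302 | grep -A 8 'Allocated resources'

# Per-pod CPU requests across all namespaces.
kubectl --context addons-376302 get pods -A \
  -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'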
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-376302
helpers_test.go:235: (dbg) docker inspect addons-376302:

-- stdout --
	[
	    {
	        "Id": "56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab",
	        "Created": "2024-09-27T00:24:28.459709474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 590339,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:24:28.607197623Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab/hosts",
	        "LogPath": "/var/lib/docker/containers/56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab/56b458f3a125ce0d3e35c1efb5ab21b0d0ee44ad655897875a0b4981679bb1ab-json.log",
	        "Name": "/addons-376302",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-376302:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-376302",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1ea8a27628d81624b2c8bc14224aff6021d12768ed9c84d0189ad54e0919a3b-init/diff:/var/lib/docker/overlay2/bde64bdaf549207dbef5d6ae31e43a20e66f52572944c42fb69017a1243b58d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1ea8a27628d81624b2c8bc14224aff6021d12768ed9c84d0189ad54e0919a3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1ea8a27628d81624b2c8bc14224aff6021d12768ed9c84d0189ad54e0919a3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1ea8a27628d81624b2c8bc14224aff6021d12768ed9c84d0189ad54e0919a3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-376302",
	                "Source": "/var/lib/docker/volumes/addons-376302/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-376302",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-376302",
	                "name.minikube.sigs.k8s.io": "addons-376302",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfa7256fd418af487413ae43dd6d5164a7571da1ef38026131fb98696ce6b118",
	            "SandboxKey": "/var/run/docker/netns/bfa7256fd418",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-376302": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dacb264def115d400235cc41d43a7c977a20b010ce495fba058fa2a190af267d",
	                    "EndpointID": "1a224adb092e2ac52787a73bd2a4c21c2e65d31449c54c91ebd9c638d7ded400",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-376302",
	                        "56b458f3a125"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
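
The inspect output above records the node container's sizing: "NanoCpus": 2000000000 (2 CPUs) and "Memory": 4194304000 (the --memory=4000 passed at start, visible in the audit log below). A sketch for pulling just those two fields with a Go template, assuming the container still exists:

# Print the CPU (NanoCpus) and memory limits of the minikube node container.
docker inspect -f 'cpus={{.HostConfig.NanoCpus}} mem={{.HostConfig.Memory}}' addons-376302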
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-376302 -n addons-376302
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 logs -n 25: (1.491527329s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-607949   | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC |                     |
	|         | -p download-only-607949              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC | 27 Sep 24 00:23 UTC |
	| delete  | -p download-only-607949              | download-only-607949   | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC | 27 Sep 24 00:23 UTC |
	| start   | -o=json --download-only              | download-only-981151   | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC |                     |
	|         | -p download-only-981151              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| delete  | -p download-only-981151              | download-only-981151   | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| delete  | -p download-only-607949              | download-only-607949   | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| delete  | -p download-only-981151              | download-only-981151   | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| start   | --download-only -p                   | download-docker-397635 | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC |                     |
	|         | download-docker-397635               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-397635            | download-docker-397635 | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| start   | --download-only -p                   | binary-mirror-548949   | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC |                     |
	|         | binary-mirror-548949                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37853               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-548949              | binary-mirror-548949   | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:24 UTC |
	| addons  | disable dashboard -p                 | addons-376302          | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC |                     |
	|         | addons-376302                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-376302          | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC |                     |
	|         | addons-376302                        |                        |         |         |                     |                     |
	| start   | -p addons-376302 --wait=true         | addons-376302          | jenkins | v1.34.0 | 27 Sep 24 00:24 UTC | 27 Sep 24 00:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:24:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:24:04.088958  589846 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:24:04.089115  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:24:04.089139  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:24:04.089146  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:24:04.089433  589846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:24:04.089922  589846 out.go:352] Setting JSON to false
	I0927 00:24:04.090899  589846 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14779,"bootTime":1727381865,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 00:24:04.090980  589846 start.go:139] virtualization:  
	I0927 00:24:04.093496  589846 out.go:177] * [addons-376302] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:24:04.095671  589846 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:24:04.095776  589846 notify.go:220] Checking for updates...
	I0927 00:24:04.099156  589846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:24:04.101327  589846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:24:04.103591  589846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 00:24:04.105626  589846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:24:04.107458  589846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:24:04.109275  589846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:24:04.129018  589846 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:24:04.129153  589846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:24:04.198776  589846 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:24:04.189660439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:24:04.198896  589846 docker.go:318] overlay module found
	I0927 00:24:04.200973  589846 out.go:177] * Using the docker driver based on user configuration
	I0927 00:24:04.203360  589846 start.go:297] selected driver: docker
	I0927 00:24:04.203383  589846 start.go:901] validating driver "docker" against <nil>
	I0927 00:24:04.203397  589846 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:24:04.204158  589846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:24:04.250384  589846 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:24:04.241196729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:24:04.250628  589846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:24:04.250863  589846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:24:04.253578  589846 out.go:177] * Using Docker driver with root privileges
	I0927 00:24:04.255753  589846 cni.go:84] Creating CNI manager for ""
	I0927 00:24:04.255826  589846 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 00:24:04.255840  589846 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:24:04.255927  589846 start.go:340] cluster config:
	{Name:addons-376302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-376302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:24:04.258228  589846 out.go:177] * Starting "addons-376302" primary control-plane node in "addons-376302" cluster
	I0927 00:24:04.260025  589846 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 00:24:04.261660  589846 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:24:04.263337  589846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:24:04.263388  589846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 00:24:04.263400  589846 cache.go:56] Caching tarball of preloaded images
	I0927 00:24:04.263450  589846 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:24:04.263492  589846 preload.go:172] Found /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 00:24:04.263504  589846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0927 00:24:04.263862  589846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/config.json ...
	I0927 00:24:04.263895  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/config.json: {Name:mk969fce461c4c8f5125b14256912cc924363dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:04.277121  589846 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:24:04.277301  589846 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:24:04.277323  589846 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:24:04.277328  589846 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:24:04.277336  589846 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:24:04.277342  589846 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:24:21.217095  589846 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:24:21.217134  589846 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:24:21.217163  589846 start.go:360] acquireMachinesLock for addons-376302: {Name:mkd2ecedf1b6ed0010f5a71780cd08374b5d43a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:24:21.218129  589846 start.go:364] duration metric: took 924.468µs to acquireMachinesLock for "addons-376302"
	I0927 00:24:21.218171  589846 start.go:93] Provisioning new machine with config: &{Name:addons-376302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-376302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 00:24:21.218296  589846 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:24:21.220597  589846 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:24:21.220845  589846 start.go:159] libmachine.API.Create for "addons-376302" (driver="docker")
	I0927 00:24:21.220886  589846 client.go:168] LocalClient.Create starting
	I0927 00:24:21.221005  589846 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem
	I0927 00:24:21.879775  589846 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/cert.pem
	I0927 00:24:22.155675  589846 cli_runner.go:164] Run: docker network inspect addons-376302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:24:22.172552  589846 cli_runner.go:211] docker network inspect addons-376302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:24:22.172636  589846 network_create.go:284] running [docker network inspect addons-376302] to gather additional debugging logs...
	I0927 00:24:22.172660  589846 cli_runner.go:164] Run: docker network inspect addons-376302
	W0927 00:24:22.188178  589846 cli_runner.go:211] docker network inspect addons-376302 returned with exit code 1
	I0927 00:24:22.188211  589846 network_create.go:287] error running [docker network inspect addons-376302]: docker network inspect addons-376302: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-376302 not found
	I0927 00:24:22.188224  589846 network_create.go:289] output of [docker network inspect addons-376302]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-376302 not found
	
	** /stderr **
	I0927 00:24:22.188315  589846 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:24:22.204264  589846 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001884f10}
	I0927 00:24:22.204312  589846 network_create.go:124] attempt to create docker network addons-376302 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:24:22.204368  589846 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-376302 addons-376302
	I0927 00:24:22.271327  589846 network_create.go:108] docker network addons-376302 192.168.49.0/24 created
	I0927 00:24:22.271358  589846 kic.go:121] calculated static IP "192.168.49.2" for the "addons-376302" container
	I0927 00:24:22.271443  589846 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:24:22.285689  589846 cli_runner.go:164] Run: docker volume create addons-376302 --label name.minikube.sigs.k8s.io=addons-376302 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:24:22.302100  589846 oci.go:103] Successfully created a docker volume addons-376302
	I0927 00:24:22.302189  589846 cli_runner.go:164] Run: docker run --rm --name addons-376302-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376302 --entrypoint /usr/bin/test -v addons-376302:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:24:24.358800  589846 cli_runner.go:217] Completed: docker run --rm --name addons-376302-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376302 --entrypoint /usr/bin/test -v addons-376302:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.05657143s)
	I0927 00:24:24.358856  589846 oci.go:107] Successfully prepared a docker volume addons-376302
	I0927 00:24:24.358879  589846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:24:24.358898  589846 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:24:24.358987  589846 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-376302:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:24:28.400004  589846 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-376302:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.040977163s)
	I0927 00:24:28.400037  589846 kic.go:203] duration metric: took 4.041136276s to extract preloaded images to volume ...
	W0927 00:24:28.400176  589846 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:24:28.400308  589846 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:24:28.447116  589846 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-376302 --name addons-376302 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376302 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-376302 --network addons-376302 --ip 192.168.49.2 --volume addons-376302:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:24:28.762749  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Running}}
	I0927 00:24:28.783901  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:28.816079  589846 cli_runner.go:164] Run: docker exec addons-376302 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:24:28.899314  589846 oci.go:144] the created container "addons-376302" has a running status.
	I0927 00:24:28.899345  589846 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa...
	I0927 00:24:30.002509  589846 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:24:30.056110  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:30.080259  589846 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:24:30.080289  589846 kic_runner.go:114] Args: [docker exec --privileged addons-376302 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:24:30.154020  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:30.197535  589846 machine.go:93] provisionDockerMachine start ...
	I0927 00:24:30.197655  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:30.217924  589846 main.go:141] libmachine: Using SSH client type: native
	I0927 00:24:30.218212  589846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0927 00:24:30.218229  589846 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:24:30.346093  589846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-376302
	
	I0927 00:24:30.346115  589846 ubuntu.go:169] provisioning hostname "addons-376302"
	I0927 00:24:30.346182  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:30.363949  589846 main.go:141] libmachine: Using SSH client type: native
	I0927 00:24:30.364198  589846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0927 00:24:30.364214  589846 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-376302 && echo "addons-376302" | sudo tee /etc/hostname
	I0927 00:24:30.506040  589846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-376302
	
	I0927 00:24:30.506118  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:30.524205  589846 main.go:141] libmachine: Using SSH client type: native
	I0927 00:24:30.524456  589846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0927 00:24:30.524478  589846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-376302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-376302/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-376302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:24:30.654606  589846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:24:30.654636  589846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-583677/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-583677/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-583677/.minikube}
	I0927 00:24:30.654655  589846 ubuntu.go:177] setting up certificates
	I0927 00:24:30.654665  589846 provision.go:84] configureAuth start
	I0927 00:24:30.654727  589846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376302
	I0927 00:24:30.674537  589846 provision.go:143] copyHostCerts
	I0927 00:24:30.674628  589846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-583677/.minikube/cert.pem (1123 bytes)
	I0927 00:24:30.674761  589846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-583677/.minikube/key.pem (1679 bytes)
	I0927 00:24:30.674836  589846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-583677/.minikube/ca.pem (1082 bytes)
	I0927 00:24:30.674895  589846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-583677/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca-key.pem org=jenkins.addons-376302 san=[127.0.0.1 192.168.49.2 addons-376302 localhost minikube]
	I0927 00:24:31.002446  589846 provision.go:177] copyRemoteCerts
	I0927 00:24:31.002542  589846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:24:31.002589  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:31.022489  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:31.115824  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 00:24:31.140493  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:24:31.164619  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:24:31.188308  589846 provision.go:87] duration metric: took 533.629255ms to configureAuth
	I0927 00:24:31.188333  589846 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:24:31.188533  589846 config.go:182] Loaded profile config "addons-376302": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:24:31.188546  589846 machine.go:96] duration metric: took 990.982059ms to provisionDockerMachine
	I0927 00:24:31.188553  589846 client.go:171] duration metric: took 9.967657679s to LocalClient.Create
	I0927 00:24:31.188578  589846 start.go:167] duration metric: took 9.967733871s to libmachine.API.Create "addons-376302"
	I0927 00:24:31.188590  589846 start.go:293] postStartSetup for "addons-376302" (driver="docker")
	I0927 00:24:31.188600  589846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:24:31.188652  589846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:24:31.188695  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:31.205711  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:31.303140  589846 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:24:31.306104  589846 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:24:31.306142  589846 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:24:31.306156  589846 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:24:31.306163  589846 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:24:31.306173  589846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-583677/.minikube/addons for local assets ...
	I0927 00:24:31.306239  589846 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-583677/.minikube/files for local assets ...
	I0927 00:24:31.306272  589846 start.go:296] duration metric: took 117.675618ms for postStartSetup
	I0927 00:24:31.306615  589846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376302
	I0927 00:24:31.322584  589846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/config.json ...
	I0927 00:24:31.322884  589846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:24:31.323101  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:31.340929  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:31.431039  589846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:24:31.435433  589846 start.go:128] duration metric: took 10.217119414s to createHost
	I0927 00:24:31.435458  589846 start.go:83] releasing machines lock for "addons-376302", held for 10.217310469s
	I0927 00:24:31.435528  589846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376302
	I0927 00:24:31.451708  589846 ssh_runner.go:195] Run: cat /version.json
	I0927 00:24:31.451763  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:31.451768  589846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:24:31.451844  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:31.469547  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:31.470062  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:31.557802  589846 ssh_runner.go:195] Run: systemctl --version
	I0927 00:24:31.686214  589846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:24:31.690678  589846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 00:24:31.715880  589846 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:24:31.715981  589846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:24:31.745592  589846 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
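(The find/sed pipeline at 00:24:31.690678 normalizes every loopback CNI config: it inserts a "name" key when one is missing and pins cniVersion to 1.0.0. A hedged before/after sketch, using a hypothetical /etc/cni/net.d/200-loopback.conf:)

	# before the patch (hypothetical contents)
	{
	    "cniVersion": "0.3.1",
	    "type": "loopback"
	}
	# after the patch: "name" inserted above the "type" line, cniVersion pinned
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}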
	I0927 00:24:31.745618  589846 start.go:495] detecting cgroup driver to use...
	I0927 00:24:31.745674  589846 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:24:31.745749  589846 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 00:24:31.758335  589846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 00:24:31.770102  589846 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:24:31.770184  589846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:24:31.784364  589846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:24:31.799116  589846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:24:31.886538  589846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:24:31.973823  589846 docker.go:233] disabling docker service ...
	I0927 00:24:31.973887  589846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:24:31.993171  589846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:24:32.009445  589846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:24:32.101954  589846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:24:32.189957  589846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:24:32.201623  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:24:32.218791  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 00:24:32.228882  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 00:24:32.238873  589846 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 00:24:32.238985  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 00:24:32.248967  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:24:32.259003  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 00:24:32.269376  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:24:32.279518  589846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:24:32.289222  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 00:24:32.299042  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 00:24:32.308920  589846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
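(Taken together, the sed edits from 00:24:32.218791 through 00:24:32.308920 leave /etc/containerd/config.toml with the values below. This is a hedged fragment, not the full file — the exact table layout depends on the config.toml the base image ships — showing only the keys those commands set:)

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false   # cgroupfs driver, per the detection at 00:24:31.745674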
	I0927 00:24:32.318849  589846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:24:32.327785  589846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:24:32.336582  589846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:24:32.424434  589846 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 00:24:32.564039  589846 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0927 00:24:32.564163  589846 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0927 00:24:32.568000  589846 start.go:563] Will wait 60s for crictl version
	I0927 00:24:32.568112  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:24:32.571716  589846 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:24:32.606261  589846 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0927 00:24:32.606386  589846 ssh_runner.go:195] Run: containerd --version
	I0927 00:24:32.630876  589846 ssh_runner.go:195] Run: containerd --version
	I0927 00:24:32.654226  589846 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0927 00:24:32.656458  589846 cli_runner.go:164] Run: docker network inspect addons-376302 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:24:32.670721  589846 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:24:32.674101  589846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:24:32.684375  589846 kubeadm.go:883] updating cluster {Name:addons-376302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-376302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:24:32.684494  589846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:24:32.684560  589846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:24:32.720341  589846 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 00:24:32.720365  589846 containerd.go:534] Images already preloaded, skipping extraction
	I0927 00:24:32.720426  589846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:24:32.755293  589846 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 00:24:32.755318  589846 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:24:32.755326  589846 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0927 00:24:32.755418  589846 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-376302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-376302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:24:32.755493  589846 ssh_runner.go:195] Run: sudo crictl info
	I0927 00:24:32.792512  589846 cni.go:84] Creating CNI manager for ""
	I0927 00:24:32.792536  589846 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 00:24:32.792547  589846 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:24:32.792569  589846 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-376302 NodeName:addons-376302 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:24:32.792697  589846 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-376302"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
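(The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place for kubeadm init. To sanity-check such a file by hand, recent kubeadm releases ship a validator; a hedged sketch — availability of the subcommand in this exact image is an assumption — using the binary path minikube installs:)

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml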
	I0927 00:24:32.792771  589846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:24:32.801746  589846 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:24:32.801863  589846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:24:32.810562  589846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 00:24:32.828403  589846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:24:32.847137  589846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0927 00:24:32.864855  589846 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:24:32.868551  589846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
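(The two grep/echo one-liners at 00:24:32.674101 and 00:24:32.868551 each strip any stale entry and append a fresh one, so after both have run, /etc/hosts on the node ends with these tab-separated lines:)

	192.168.49.1	host.minikube.internal
	192.168.49.2	control-plane.minikube.internal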
	I0927 00:24:32.879662  589846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:24:32.966188  589846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:24:32.981857  589846 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302 for IP: 192.168.49.2
	I0927 00:24:32.981880  589846 certs.go:194] generating shared ca certs ...
	I0927 00:24:32.981897  589846 certs.go:226] acquiring lock for ca certs: {Name:mk008a6c957a7b891b6a534ee8dfae7b680b060c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:32.982110  589846 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-583677/.minikube/ca.key
	I0927 00:24:33.181137  589846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt ...
	I0927 00:24:33.181170  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt: {Name:mkc45fb74234d083d260abca12aab7acb9eb21fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.181936  589846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-583677/.minikube/ca.key ...
	I0927 00:24:33.181953  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/ca.key: {Name:mk93c8a50a2653fc9664c4633c48656e87ac595f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.182052  589846 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.key
	I0927 00:24:33.407364  589846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.crt ...
	I0927 00:24:33.407394  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.crt: {Name:mk9be3228cc003fe659fbb559b9f1f34fbe74bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.408098  589846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.key ...
	I0927 00:24:33.408116  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.key: {Name:mkdff29ea1a8b181ec44fcf84d2b49373856e7e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.408199  589846 certs.go:256] generating profile certs ...
	I0927 00:24:33.408260  589846 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.key
	I0927 00:24:33.408276  589846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt with IP's: []
	I0927 00:24:33.618843  589846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt ...
	I0927 00:24:33.618893  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: {Name:mkbf03d2274ce60910f238a804ace6b75bcc30cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.619738  589846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.key ...
	I0927 00:24:33.619757  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.key: {Name:mk7f23e5ea4a6531cc05c95a3e810674f429c521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.620287  589846 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key.64290ad9
	I0927 00:24:33.620313  589846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt.64290ad9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:24:33.825788  589846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt.64290ad9 ...
	I0927 00:24:33.825818  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt.64290ad9: {Name:mkf5cc25ae0116d5336f6fbc08f2cf4e9ebf9c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.826002  589846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key.64290ad9 ...
	I0927 00:24:33.826016  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key.64290ad9: {Name:mkaa541ada476a4753fb43f2968bc5f94ce4b002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:33.826099  589846 certs.go:381] copying /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt.64290ad9 -> /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt
	I0927 00:24:33.826181  589846 certs.go:385] copying /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key.64290ad9 -> /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key
	I0927 00:24:33.826234  589846 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.key
	I0927 00:24:33.826254  589846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.crt with IP's: []
	I0927 00:24:34.824279  589846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.crt ...
	I0927 00:24:34.824312  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.crt: {Name:mka388dc129fbea48897e69519f3f021b4e4aae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:34.824502  589846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.key ...
	I0927 00:24:34.824520  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.key: {Name:mkbb7155b1d8755fe903e6c09513c260f231d70d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:34.824718  589846 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 00:24:34.824764  589846 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/ca.pem (1082 bytes)
	I0927 00:24:34.824792  589846 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:24:34.824820  589846 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-583677/.minikube/certs/key.pem (1679 bytes)
	I0927 00:24:34.825669  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:24:34.849753  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:24:34.876069  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:24:34.902068  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 00:24:34.926177  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:24:34.950876  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:24:34.977198  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:24:35.010601  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:24:35.050265  589846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:24:35.076671  589846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:24:35.097048  589846 ssh_runner.go:195] Run: openssl version
	I0927 00:24:35.103276  589846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:24:35.113905  589846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:24:35.117781  589846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:24:35.117848  589846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:24:35.125173  589846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
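(The symlink name b5213941.0 above is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by subject-name hash, and the `openssl x509 -hash` call two lines up prints exactly that hash for minikubeCA:)

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	# hence the trust link created above:
	# ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0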
	I0927 00:24:35.134687  589846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:24:35.138308  589846 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:24:35.138359  589846 kubeadm.go:392] StartCluster: {Name:addons-376302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-376302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:24:35.138441  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0927 00:24:35.138541  589846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:24:35.177547  589846 cri.go:89] found id: ""
	I0927 00:24:35.177630  589846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:24:35.187747  589846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:24:35.196640  589846 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:24:35.196736  589846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:24:35.205825  589846 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:24:35.205847  589846 kubeadm.go:157] found existing configuration files:
	
	I0927 00:24:35.205921  589846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:24:35.214762  589846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:24:35.214894  589846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:24:35.223679  589846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:24:35.233054  589846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:24:35.233146  589846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:24:35.241438  589846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:24:35.250076  589846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:24:35.250168  589846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:24:35.258785  589846 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:24:35.267836  589846 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:24:35.267899  589846 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:24:35.276162  589846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:24:35.317149  589846 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:24:35.317213  589846 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:24:35.350366  589846 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:24:35.350440  589846 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 00:24:35.350495  589846 kubeadm.go:310] OS: Linux
	I0927 00:24:35.350541  589846 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:24:35.350596  589846 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:24:35.350649  589846 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:24:35.350702  589846 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:24:35.350754  589846 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:24:35.350808  589846 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:24:35.350863  589846 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:24:35.350915  589846 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:24:35.350964  589846 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:24:35.422722  589846 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:24:35.422861  589846 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:24:35.422957  589846 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:24:35.428399  589846 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:24:35.431091  589846 out.go:235]   - Generating certificates and keys ...
	I0927 00:24:35.431188  589846 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:24:35.431255  589846 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:24:35.685122  589846 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:24:36.102118  589846 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:24:36.636667  589846 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:24:36.813018  589846 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:24:37.185872  589846 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:24:37.186094  589846 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-376302 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:24:37.754036  589846 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:24:37.754180  589846 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-376302 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:24:38.054081  589846 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:24:38.664032  589846 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:24:39.076433  589846 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:24:39.076599  589846 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:24:39.624527  589846 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:24:40.520184  589846 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:24:41.137987  589846 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:24:41.733601  589846 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:24:42.675518  589846 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:24:42.676394  589846 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:24:42.679371  589846 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:24:42.681663  589846 out.go:235]   - Booting up control plane ...
	I0927 00:24:42.681761  589846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:24:42.681838  589846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:24:42.682393  589846 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:24:42.692942  589846 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:24:42.699317  589846 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:24:42.699677  589846 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:24:42.800282  589846 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:24:42.800406  589846 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:24:43.801776  589846 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001595411s
	I0927 00:24:43.801873  589846 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:24:49.803415  589846 kubeadm.go:310] [api-check] The API server is healthy after 6.001624452s
	I0927 00:24:49.822822  589846 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:24:49.836045  589846 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:24:49.857934  589846 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:24:49.858449  589846 kubeadm.go:310] [mark-control-plane] Marking the node addons-376302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:24:49.870002  589846 kubeadm.go:310] [bootstrap-token] Using token: 29k8v1.wlnmbgivp7440xc5
	I0927 00:24:49.872024  589846 out.go:235]   - Configuring RBAC rules ...
	I0927 00:24:49.872151  589846 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:24:49.878694  589846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:24:49.886670  589846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:24:49.890130  589846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:24:49.893850  589846 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:24:49.897913  589846 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:24:50.210157  589846 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:24:50.635693  589846 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:24:51.210180  589846 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:24:51.211423  589846 kubeadm.go:310] 
	I0927 00:24:51.211503  589846 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:24:51.211515  589846 kubeadm.go:310] 
	I0927 00:24:51.211596  589846 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:24:51.211601  589846 kubeadm.go:310] 
	I0927 00:24:51.211627  589846 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:24:51.211685  589846 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:24:51.211736  589846 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:24:51.211740  589846 kubeadm.go:310] 
	I0927 00:24:51.211793  589846 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:24:51.211798  589846 kubeadm.go:310] 
	I0927 00:24:51.211845  589846 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:24:51.211850  589846 kubeadm.go:310] 
	I0927 00:24:51.211902  589846 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:24:51.211984  589846 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:24:51.212053  589846 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:24:51.212057  589846 kubeadm.go:310] 
	I0927 00:24:51.212140  589846 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:24:51.212215  589846 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:24:51.212221  589846 kubeadm.go:310] 
	I0927 00:24:51.212303  589846 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 29k8v1.wlnmbgivp7440xc5 \
	I0927 00:24:51.212404  589846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12f53ed8c2ac46c62d729381f4778f67102dcff5ba944c90cf10559fb62c21c5 \
	I0927 00:24:51.212425  589846 kubeadm.go:310] 	--control-plane 
	I0927 00:24:51.212429  589846 kubeadm.go:310] 
	I0927 00:24:51.212512  589846 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:24:51.212517  589846 kubeadm.go:310] 
	I0927 00:24:51.212597  589846 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 29k8v1.wlnmbgivp7440xc5 \
	I0927 00:24:51.212698  589846 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12f53ed8c2ac46c62d729381f4778f67102dcff5ba944c90cf10559fb62c21c5 
	I0927 00:24:51.215253  589846 kubeadm.go:310] W0927 00:24:35.312017    1015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:24:51.215546  589846 kubeadm.go:310] W0927 00:24:35.313490    1015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:24:51.215758  589846 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 00:24:51.215865  589846 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
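(The two v1beta3 deprecation warnings above carry kubeadm's own remedy. A hedged sketch of the migrate invocation — the output path is hypothetical, and kubeadm v1.31 writes the newer kubeadm.k8s.io/v1beta4 spec:)

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml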
	I0927 00:24:51.215885  589846 cni.go:84] Creating CNI manager for ""
	I0927 00:24:51.215893  589846 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 00:24:51.218907  589846 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:24:51.220372  589846 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:24:51.224172  589846 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:24:51.224192  589846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:24:51.241233  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 00:24:51.524579  589846 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:24:51.524705  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:51.524788  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-376302 minikube.k8s.io/updated_at=2024_09_27T00_24_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-376302 minikube.k8s.io/primary=true
	I0927 00:24:51.761861  589846 ops.go:34] apiserver oom_adj: -16
	I0927 00:24:51.761994  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:52.262980  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:52.762482  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:53.262665  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:53.762114  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:54.262776  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:54.762572  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:55.262902  589846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:24:55.417882  589846 kubeadm.go:1113] duration metric: took 3.893219968s to wait for elevateKubeSystemPrivileges
	I0927 00:24:55.417907  589846 kubeadm.go:394] duration metric: took 20.279552515s to StartCluster
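(The burst of `kubectl get sa default` calls between 00:24:51.761994 and 00:24:55.262902 is minikube polling, at roughly 500ms intervals, for the default service account to exist after the minikube-rbac cluster-admin binding is created. A hedged bash equivalent of that sequence — KUBECTL here is a hypothetical shorthand, not a minikube variable:)

	KUBECTL="sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$KUBECTL create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# wait until the default service account has been provisioned
	until $KUBECTL get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done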
	I0927 00:24:55.417925  589846 settings.go:142] acquiring lock: {Name:mkd70c1e53f86501638b4918726bcbef07279ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:55.418045  589846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:24:55.418428  589846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/kubeconfig: {Name:mk62ce40e80630f5ee25a51f6742eda23f381c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:55.418693  589846 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 00:24:55.418900  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:24:55.419140  589846 config.go:182] Loaded profile config "addons-376302": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:24:55.419177  589846 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:24:55.419280  589846 addons.go:69] Setting yakd=true in profile "addons-376302"
	I0927 00:24:55.419293  589846 addons.go:234] Setting addon yakd=true in "addons-376302"
	I0927 00:24:55.419317  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.419775  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.420068  589846 addons.go:69] Setting inspektor-gadget=true in profile "addons-376302"
	I0927 00:24:55.420085  589846 addons.go:234] Setting addon inspektor-gadget=true in "addons-376302"
	I0927 00:24:55.420109  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.420519  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.421058  589846 addons.go:69] Setting cloud-spanner=true in profile "addons-376302"
	I0927 00:24:55.421077  589846 addons.go:234] Setting addon cloud-spanner=true in "addons-376302"
	I0927 00:24:55.421100  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.421508  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.422774  589846 addons.go:69] Setting metrics-server=true in profile "addons-376302"
	I0927 00:24:55.422877  589846 addons.go:234] Setting addon metrics-server=true in "addons-376302"
	I0927 00:24:55.423128  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.423015  589846 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-376302"
	I0927 00:24:55.424940  589846 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-376302"
	I0927 00:24:55.425003  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.425559  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.428451  589846 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-376302"
	I0927 00:24:55.428516  589846 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-376302"
	I0927 00:24:55.428554  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.429024  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.436002  589846 addons.go:69] Setting default-storageclass=true in profile "addons-376302"
	I0927 00:24:55.436039  589846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-376302"
	I0927 00:24:55.436383  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.423031  589846 addons.go:69] Setting registry=true in profile "addons-376302"
	I0927 00:24:55.451826  589846 addons.go:234] Setting addon registry=true in "addons-376302"
	I0927 00:24:55.451872  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.452345  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.423039  589846 addons.go:69] Setting storage-provisioner=true in profile "addons-376302"
	I0927 00:24:55.465506  589846 addons.go:234] Setting addon storage-provisioner=true in "addons-376302"
	I0927 00:24:55.465548  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.466035  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.472872  589846 addons.go:69] Setting gcp-auth=true in profile "addons-376302"
	I0927 00:24:55.472906  589846 mustload.go:65] Loading cluster: addons-376302
	I0927 00:24:55.473176  589846 config.go:182] Loaded profile config "addons-376302": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:24:55.473459  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.423046  589846 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-376302"
	I0927 00:24:55.482771  589846 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-376302"
	I0927 00:24:55.483138  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.500430  589846 addons.go:69] Setting ingress=true in profile "addons-376302"
	I0927 00:24:55.500466  589846 addons.go:234] Setting addon ingress=true in "addons-376302"
	I0927 00:24:55.500509  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.500996  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.423054  589846 addons.go:69] Setting volcano=true in profile "addons-376302"
	I0927 00:24:55.506286  589846 addons.go:234] Setting addon volcano=true in "addons-376302"
	I0927 00:24:55.506327  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.506854  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.526625  589846 addons.go:69] Setting ingress-dns=true in profile "addons-376302"
	I0927 00:24:55.526654  589846 addons.go:234] Setting addon ingress-dns=true in "addons-376302"
	I0927 00:24:55.526700  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.527198  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.423061  589846 addons.go:69] Setting volumesnapshots=true in profile "addons-376302"
	I0927 00:24:55.530960  589846 addons.go:234] Setting addon volumesnapshots=true in "addons-376302"
	I0927 00:24:55.531022  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.531722  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.557440  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.578665  589846 out.go:177] * Verifying Kubernetes components...
	I0927 00:24:55.585423  589846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:24:55.605139  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:24:55.604847  589846 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:24:55.608961  589846 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:24:55.616369  589846 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:24:55.616675  589846 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:24:55.616866  589846 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:24:55.616879  589846 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:24:55.616943  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.617287  589846 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:24:55.617298  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:24:55.617337  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.629457  589846 addons.go:234] Setting addon default-storageclass=true in "addons-376302"
	I0927 00:24:55.629548  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.630048  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.646963  589846 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:24:55.653055  589846 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:24:55.653078  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:24:55.653144  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.670301  589846 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:24:55.670326  589846 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:24:55.670395  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.678056  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:24:55.680080  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:24:55.683193  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:24:55.687176  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:24:55.690025  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:24:55.691813  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:24:55.694112  589846 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:24:55.716613  589846 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:24:55.716633  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:24:55.716692  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.729211  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.732245  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:24:55.732442  589846 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:24:55.733914  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:24:55.733933  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:24:55.733997  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.739092  589846 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:24:55.739826  589846 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:24:55.740436  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:24:55.740451  589846 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:24:55.740532  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.741843  589846 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:24:55.741868  589846 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:24:55.741923  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.761267  589846 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:24:55.761292  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:24:55.761354  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.790422  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.792606  589846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:24:55.793994  589846 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-376302"
	I0927 00:24:55.794077  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:24:55.794702  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:24:55.795883  589846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:24:55.801152  589846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:24:55.803254  589846 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:24:55.803278  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:24:55.803354  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.822344  589846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:24:55.827851  589846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:24:55.827880  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:24:55.827946  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.860964  589846 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 00:24:55.862595  589846 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 00:24:55.866051  589846 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 00:24:55.869106  589846 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:24:55.869131  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 00:24:55.869199  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.896607  589846 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:24:55.896631  589846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:24:55.896690  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:55.916880  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.933278  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.946752  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.960960  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.988046  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.993978  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:55.997566  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:56.028905  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:56.032853  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:56.041199  589846 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:24:56.043036  589846 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:24:56.047860  589846 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:24:56.047895  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:24:56.047963  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:24:56.049909  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:56.066019  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	W0927 00:24:56.069313  589846 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:24:56.069343  589846 retry.go:31] will retry after 362.988195ms: ssh: handshake failed: EOF
	I0927 00:24:56.070430  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	W0927 00:24:56.072856  589846 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:24:56.072886  589846 retry.go:31] will retry after 370.033267ms: ssh: handshake failed: EOF
	I0927 00:24:56.090783  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:24:56.311494  589846 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:24:56.311564  589846 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:24:56.403217  589846 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:24:56.403288  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:24:56.415850  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:24:56.416034  589846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:24:56.513106  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:24:56.529574  589846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:24:56.529603  589846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:24:56.533930  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:24:56.574838  589846 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:24:56.574870  589846 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:24:56.591994  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:24:56.605665  589846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:24:56.605699  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:24:56.608771  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:24:56.626555  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:24:56.634743  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:24:56.634765  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:24:56.644549  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:24:56.675960  589846 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:24:56.676040  589846 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:24:56.679939  589846 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:24:56.680011  589846 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:24:56.690612  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:24:56.757919  589846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:24:56.757995  589846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:24:56.767935  589846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:24:56.768011  589846 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:24:56.868777  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:24:56.868833  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:24:56.917143  589846 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:24:56.917170  589846 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:24:56.945686  589846 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:24:56.945728  589846 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:24:57.012406  589846 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:24:57.012433  589846 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:24:57.041846  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:24:57.041873  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:24:57.066351  589846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:24:57.066423  589846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:24:57.068477  589846 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:24:57.068499  589846 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:24:57.088049  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:24:57.094177  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:24:57.094204  589846 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:24:57.156859  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:24:57.215615  589846 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:24:57.215640  589846 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:24:57.230596  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:24:57.265459  589846 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:24:57.265486  589846 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:24:57.271482  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:24:57.271508  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:24:57.284207  589846 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:24:57.284233  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:24:57.441183  589846 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:24:57.441207  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:24:57.451518  589846 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:24:57.451544  589846 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:24:57.469611  589846 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:24:57.469637  589846 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:24:57.628683  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:24:57.770080  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:24:57.818125  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:24:57.818152  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:24:57.830230  589846 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:24:57.830256  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:24:58.125453  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:24:58.125479  589846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:24:58.208557  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:24:58.389123  589846 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.973039473s)
	I0927 00:24:58.389964  589846 node_ready.go:35] waiting up to 6m0s for node "addons-376302" to be "Ready" ...
	I0927 00:24:58.390167  589846 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.974245129s)
	I0927 00:24:58.390188  589846 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:24:58.394969  589846 node_ready.go:49] node "addons-376302" has status "Ready":"True"
	I0927 00:24:58.395000  589846 node_ready.go:38] duration metric: took 5.001046ms for node "addons-376302" to be "Ready" ...
	I0927 00:24:58.395012  589846 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:24:58.406333  589846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4jzbp" in "kube-system" namespace to be "Ready" ...
	I0927 00:24:58.544147  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:24:58.544170  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:24:58.783630  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:24:58.783655  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:24:58.910537  589846 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-376302" context rescaled to 1 replicas
	I0927 00:24:58.929556  589846 pod_ready.go:98] pod "coredns-7c65d6cfc9-4jzbp" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-09-27 00:24:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID: ContainerID: Started:0x400049d58a AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40015f7e40} {Name:kube-api-access-vcq9j MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40015f7e50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:24:58.929651  589846 pod_ready.go:82] duration metric: took 523.278312ms for pod "coredns-7c65d6cfc9-4jzbp" in "kube-system" namespace to be "Ready" ...
	E0927 00:24:58.929678  589846 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-4jzbp" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 00:24:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-09-27 00:24:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID: ContainerID: Started:0x400049d58a AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40015f7e40} {Name:kube-api-access-vcq9j MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40015f7e50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 00:24:58.929729  589846 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace to be "Ready" ...
	I0927 00:24:59.185428  589846 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:24:59.185502  589846 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:24:59.628263  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:25:00.936626  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:02.940996  589846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:25:02.941145  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:25:02.944397  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:02.980794  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:25:03.385703  589846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:25:03.529856  589846 addons.go:234] Setting addon gcp-auth=true in "addons-376302"
	I0927 00:25:03.529958  589846 host.go:66] Checking if "addons-376302" exists ...
	I0927 00:25:03.530488  589846 cli_runner.go:164] Run: docker container inspect addons-376302 --format={{.State.Status}}
	I0927 00:25:03.547582  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.034434784s)
	I0927 00:25:03.547613  589846 addons.go:475] Verifying addon ingress=true in "addons-376302"
	I0927 00:25:03.547761  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.013802762s)
	I0927 00:25:03.547807  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.95578523s)
	I0927 00:25:03.547816  589846 addons.go:475] Verifying addon registry=true in "addons-376302"
	I0927 00:25:03.548049  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.939249586s)
	I0927 00:25:03.548109  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.921532514s)
	I0927 00:25:03.548147  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.90357934s)
	I0927 00:25:03.548373  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.857683473s)
	I0927 00:25:03.548488  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.460414699s)
	I0927 00:25:03.552029  589846 out.go:177] * Verifying ingress addon...
	I0927 00:25:03.553821  589846 out.go:177] * Verifying registry addon...
	I0927 00:25:03.557700  589846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:25:03.558030  589846 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:25:03.565699  589846 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:25:03.565754  589846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376302
	I0927 00:25:03.572181  589846 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:25:03.572205  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0927 00:25:03.573018  589846 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0927 00:25:03.594232  589846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/addons-376302/id_rsa Username:docker}
	I0927 00:25:03.668872  589846 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:25:03.668961  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:04.064179  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:04.066411  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:04.569882  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:04.570560  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:05.114639  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:05.153066  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:05.154113  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:05.623046  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:05.624326  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:06.037351  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.880455714s)
	I0927 00:25:06.037483  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.806856256s)
	I0927 00:25:06.037537  589846 addons.go:475] Verifying addon metrics-server=true in "addons-376302"
	I0927 00:25:06.037567  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.408838505s)
	W0927 00:25:06.037600  589846 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:25:06.037630  589846 retry.go:31] will retry after 255.489638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:25:06.037720  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.829119786s)
	I0927 00:25:06.037764  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.267529961s)
	I0927 00:25:06.037922  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.409578663s)
	I0927 00:25:06.037940  589846 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-376302"
	I0927 00:25:06.038078  589846 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.472356548s)
	I0927 00:25:06.040094  589846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:25:06.040103  589846 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:25:06.040234  589846 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-376302 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:25:06.044310  589846 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:25:06.045140  589846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:25:06.047901  589846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:25:06.047939  589846 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:25:06.107559  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:06.108940  589846 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:25:06.109022  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:06.109552  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:06.128377  589846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:25:06.128445  589846 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:25:06.213100  589846 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:25:06.213174  589846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:25:06.293460  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:25:06.340245  589846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:25:06.554826  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:06.562429  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:06.563355  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:07.067164  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:07.076543  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:07.077342  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:07.462996  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:07.551700  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:07.563797  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:07.565320  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:07.835509  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.541957517s)
	I0927 00:25:07.835616  589846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.495289559s)
	I0927 00:25:07.839976  589846 addons.go:475] Verifying addon gcp-auth=true in "addons-376302"
	I0927 00:25:07.847197  589846 out.go:177] * Verifying gcp-auth addon...
	I0927 00:25:07.849936  589846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:25:07.853345  589846 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:25:08.051555  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:08.064244  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:08.065811  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:08.551549  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:08.565609  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:08.567142  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:09.050631  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:09.062936  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:09.063699  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:09.553188  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:09.564191  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:09.567455  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:09.936457  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:10.054641  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:10.063765  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:10.065472  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:10.550249  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:10.561730  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:10.562644  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:11.052730  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:11.062551  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:11.064352  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:11.550967  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:11.562531  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:11.565269  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:11.937009  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:12.050162  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:12.063595  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:12.063914  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:12.554862  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:12.566481  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:12.568250  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:13.050648  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:13.062702  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:13.063637  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:13.550708  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:13.563680  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:13.564777  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:14.050085  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:14.062964  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:14.063426  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:14.436433  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:14.550397  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:14.561740  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:14.562939  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:15.057347  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:15.064455  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:15.065420  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:15.549589  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:15.563714  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:15.564023  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:16.050978  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:16.063941  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:16.065364  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:16.550062  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:16.562216  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:16.563157  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:16.937942  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:17.050308  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:17.063538  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:17.064483  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:17.549822  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:17.561251  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:17.562916  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:18.052391  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:18.062828  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:18.063510  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:18.552009  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:18.567590  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:18.567763  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:19.051196  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:19.062511  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:19.063617  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:19.436957  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:19.550355  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:19.651368  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:19.653227  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:20.049789  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:20.062324  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:20.063796  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:20.550355  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:20.563498  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:20.564515  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:21.050177  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:21.062137  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:21.063166  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:21.550347  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:21.562116  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:21.562775  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:21.936138  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:22.049577  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:22.062548  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:22.063690  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:22.550583  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:22.562367  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:22.563344  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:23.050962  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:23.062273  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:23.064150  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:23.550000  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:23.562542  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:23.563126  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:23.936539  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:24.050234  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:24.063005  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:24.064048  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:24.550392  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:24.563305  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:24.564972  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:25.050114  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:25.062751  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:25.063355  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:25.550481  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:25.563058  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:25.564060  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:26.049688  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:26.061844  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:26.063216  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:26.435831  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:26.550428  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:26.562021  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:26.563399  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:27.050405  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:27.062407  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:27.063276  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:27.551411  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:27.650912  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:27.651566  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:28.049590  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:28.062649  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:28.063084  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:28.436354  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:28.550165  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:28.562285  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:28.562925  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:29.050447  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:29.064368  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:29.064994  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:29.550573  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:29.562130  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:29.563687  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:30.051814  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:30.071804  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:30.072205  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:30.550356  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:30.561339  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:30.562513  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:30.937285  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:31.050737  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:31.063826  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:31.064686  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:31.551699  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:31.651001  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:31.652767  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:32.049617  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:32.062329  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:32.062740  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:32.550772  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:32.563172  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:32.564157  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:33.049906  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:33.062740  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:33.063765  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:33.435954  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:33.551448  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:33.651819  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:33.652421  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:34.051588  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:34.065163  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:34.066160  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:34.550552  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:34.561995  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:34.563603  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:35.050902  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:35.063043  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:35.063362  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:35.436453  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:35.552528  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:35.561855  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:35.563342  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:36.050895  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:36.063145  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:36.063740  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:36.549693  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:36.561194  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:36.563705  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:37.050897  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:37.061595  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:37.063363  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:37.551101  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:37.562367  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:37.563325  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:37.937076  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:38.050498  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:38.063240  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:38.064096  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:38.550442  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:38.566594  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:38.568183  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:39.051793  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:39.062392  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:39.063483  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:39.550999  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:39.562208  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:39.563048  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:40.053168  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:40.062658  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:40.063919  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:40.436729  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:40.549694  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:40.563830  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:40.564158  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:41.051133  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:41.063148  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:41.063489  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:41.550518  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:41.563982  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:41.565542  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:42.050657  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:42.071345  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:42.071819  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:42.438473  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:42.550492  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:42.565293  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:42.566703  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:43.050103  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:43.062375  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:43.063759  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:43.550177  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:43.562628  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:43.563227  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:44.050527  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:44.062085  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:44.063103  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:44.551546  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:44.563550  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:44.565436  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:44.935486  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:45.052500  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:45.081910  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:45.097383  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:45.551834  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:45.563925  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:45.564695  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:46.056487  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:46.063403  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:46.063738  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:46.550276  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:46.562427  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:46.562642  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:46.936267  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:47.050186  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:47.062158  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:47.063216  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:47.549893  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:47.562928  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:47.565246  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:48.051322  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:48.063029  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:48.064027  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:48.550474  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:48.563200  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:48.564463  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:48.939756  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:49.049849  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:49.063182  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:49.064379  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:49.551710  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:49.562185  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:49.563144  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:50.050108  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:50.062583  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:25:50.064779  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:50.551169  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:50.563209  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:50.563489  589846 kapi.go:107] duration metric: took 47.005792749s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:25:51.050296  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:51.062295  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:51.436619  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:51.554645  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:51.563108  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:52.050534  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:52.062415  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:52.549744  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:52.562659  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:53.051238  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:53.062956  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:53.552113  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:53.652149  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:53.937212  589846 pod_ready.go:103] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:54.052297  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:54.064610  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:54.553553  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:54.575357  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:54.938368  589846 pod_ready.go:93] pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:54.938440  589846 pod_ready.go:82] duration metric: took 56.008687822s for pod "coredns-7c65d6cfc9-7kj2z" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.938482  589846 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.949941  589846 pod_ready.go:93] pod "etcd-addons-376302" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:54.950012  589846 pod_ready.go:82] duration metric: took 11.503739ms for pod "etcd-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.950042  589846 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.962214  589846 pod_ready.go:93] pod "kube-apiserver-addons-376302" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:54.962284  589846 pod_ready.go:82] duration metric: took 12.221061ms for pod "kube-apiserver-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.962311  589846 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.971728  589846 pod_ready.go:93] pod "kube-controller-manager-addons-376302" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:54.971795  589846 pod_ready.go:82] duration metric: took 9.464051ms for pod "kube-controller-manager-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:54.971822  589846 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5q7w" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:55.002111  589846 pod_ready.go:93] pod "kube-proxy-m5q7w" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:55.002199  589846 pod_ready.go:82] duration metric: took 30.354983ms for pod "kube-proxy-m5q7w" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:55.002227  589846 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:55.052355  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:55.063933  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:55.334774  589846 pod_ready.go:93] pod "kube-scheduler-addons-376302" in "kube-system" namespace has status "Ready":"True"
	I0927 00:25:55.334852  589846 pod_ready.go:82] duration metric: took 332.603057ms for pod "kube-scheduler-addons-376302" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:55.334880  589846 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace to be "Ready" ...
	I0927 00:25:55.552460  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:55.566627  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:56.050053  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:56.062080  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:56.549906  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:56.568115  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:57.050556  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:57.062622  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:57.340851  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:57.550420  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:57.562570  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:58.051482  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:58.065016  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:58.557206  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:58.564565  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:59.054405  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:59.062919  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:25:59.340905  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:25:59.551764  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:25:59.651807  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:00.090662  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:00.105879  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:00.551214  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:00.562191  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:01.050044  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:01.062779  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:01.388430  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:26:01.549417  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:01.562191  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:02.051939  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:02.063376  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:02.550450  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:02.563039  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:03.050668  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:03.063271  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:03.551285  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:03.565324  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:03.841679  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:26:04.050519  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:04.063847  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:04.551226  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:04.651667  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:05.050944  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:05.062681  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:05.550285  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:05.563571  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:05.843344  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:26:06.051018  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:06.063409  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:06.550163  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:06.563102  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:07.049753  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:07.063006  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:07.550895  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:07.563653  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:08.050663  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:08.063374  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:08.341761  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:26:08.551238  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:08.563356  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:09.051649  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:09.064127  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:09.549750  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:09.562200  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:10.067695  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:10.071491  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:10.550433  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:10.650093  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:10.841454  589846 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"False"
	I0927 00:26:11.123313  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:11.123408  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:11.550697  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:11.562281  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:12.050650  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:12.062683  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:12.340557  589846 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace has status "Ready":"True"
	I0927 00:26:12.340584  589846 pod_ready.go:82] duration metric: took 17.005684427s for pod "nvidia-device-plugin-daemonset-wjw6j" in "kube-system" namespace to be "Ready" ...
	I0927 00:26:12.340594  589846 pod_ready.go:39] duration metric: took 1m13.945571466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:26:12.340609  589846 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:26:12.340663  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:26:12.340726  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:26:12.392933  589846 cri.go:89] found id: "4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:12.392998  589846 cri.go:89] found id: ""
	I0927 00:26:12.393021  589846 logs.go:276] 1 containers: [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984]
	I0927 00:26:12.393104  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.397628  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 00:26:12.397698  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:26:12.435514  589846 cri.go:89] found id: "7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:12.435538  589846 cri.go:89] found id: ""
	I0927 00:26:12.435548  589846 logs.go:276] 1 containers: [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611]
	I0927 00:26:12.435604  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.439389  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 00:26:12.439464  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:26:12.479769  589846 cri.go:89] found id: "67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:12.479799  589846 cri.go:89] found id: ""
	I0927 00:26:12.479813  589846 logs.go:276] 1 containers: [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10]
	I0927 00:26:12.479875  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.483615  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:26:12.483684  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:26:12.527296  589846 cri.go:89] found id: "9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:12.527320  589846 cri.go:89] found id: ""
	I0927 00:26:12.527330  589846 logs.go:276] 1 containers: [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c]
	I0927 00:26:12.527390  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.532160  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:26:12.532268  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:26:12.549770  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:12.562957  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:12.592720  589846 cri.go:89] found id: "70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:12.592739  589846 cri.go:89] found id: ""
	I0927 00:26:12.592748  589846 logs.go:276] 1 containers: [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac]
	I0927 00:26:12.592821  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.596301  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:26:12.596387  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:26:12.637696  589846 cri.go:89] found id: "4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:12.637715  589846 cri.go:89] found id: ""
	I0927 00:26:12.637722  589846 logs.go:276] 1 containers: [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e]
	I0927 00:26:12.637776  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.641402  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 00:26:12.641468  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:26:12.688851  589846 cri.go:89] found id: "808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:12.688871  589846 cri.go:89] found id: ""
	I0927 00:26:12.688878  589846 logs.go:276] 1 containers: [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58]
	I0927 00:26:12.688930  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:12.692493  589846 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:26:12.692528  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:26:12.900623  589846 logs.go:123] Gathering logs for coredns [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10] ...
	I0927 00:26:12.900649  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:12.952837  589846 logs.go:123] Gathering logs for kube-scheduler [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c] ...
	I0927 00:26:12.952869  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:13.034259  589846 logs.go:123] Gathering logs for kubelet ...
	I0927 00:26:13.034288  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 00:26:13.049908  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:13.062534  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0927 00:26:13.113305  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611441    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-configmap": failed to list *v1.ConfigMap: configmaps "volcano-admission-configmap" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:13.113619  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:13.113837  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:13.114092  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:13.114299  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:13.114561  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
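A note on the reflector warnings just above (a reading based on how Kubernetes Node authorization works, not something the log states): the kubelet authenticates as system:node:addons-376302, and the Node authorizer only lets a node read a ConfigMap or Secret once a pod referencing it has been bound to that node, so the first list/watch attempts during volcano-system startup are denied with "no relationship found" and succeed on a later retry. The denial can be observed from the admin kubeconfig via impersonation (hypothetical invocation, re-using this run's context and node name):

  # Ask the API server whether the node identity may list configmaps in volcano-system.
  $ kubectl --context addons-376302 auth can-i list configmaps \
      --namespace volcano-system \
      --as=system:node:addons-376302 --as-group=system:nodes
  no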
	I0927 00:26:13.208515  589846 logs.go:123] Gathering logs for dmesg ...
	I0927 00:26:13.208590  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:26:13.228111  589846 logs.go:123] Gathering logs for kube-apiserver [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984] ...
	I0927 00:26:13.228138  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:13.366365  589846 logs.go:123] Gathering logs for etcd [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611] ...
	I0927 00:26:13.366439  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:13.430846  589846 logs.go:123] Gathering logs for kube-proxy [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac] ...
	I0927 00:26:13.430927  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:13.490731  589846 logs.go:123] Gathering logs for kube-controller-manager [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e] ...
	I0927 00:26:13.490806  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:13.550582  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:13.563131  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:13.598845  589846 logs.go:123] Gathering logs for kindnet [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58] ...
	I0927 00:26:13.598893  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:13.683560  589846 logs.go:123] Gathering logs for containerd ...
	I0927 00:26:13.683592  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 00:26:13.776309  589846 logs.go:123] Gathering logs for container status ...
	I0927 00:26:13.776346  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:26:13.833432  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:13.833457  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:26:13.833505  589846 out.go:270] X Problems detected in kubelet:
	W0927 00:26:13.833521  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:13.833528  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:13.833538  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:13.833544  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:13.833554  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	I0927 00:26:13.833559  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:13.833566  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:26:14.050759  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:14.062242  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:14.549908  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:14.563180  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:15.061083  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:15.064996  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:15.549660  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:15.563205  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:16.050821  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:16.062291  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:16.550788  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:16.562614  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:17.051014  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:17.067469  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:17.550212  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:17.565476  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:18.051352  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:18.063866  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:18.550948  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:18.563477  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:19.050779  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:19.062820  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:19.551299  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:19.562513  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:20.060806  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:20.064458  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:20.550137  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:20.563226  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:21.051223  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:21.064166  589846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:26:21.554751  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:21.563745  589846 kapi.go:107] duration metric: took 1m18.005708842s to wait for app.kubernetes.io/name=ingress-nginx ...
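The kapi.go polling above is minikube's internal equivalent of a label-selector readiness wait. The same check can be reproduced with kubectl (a sketch, assuming the ingress addon's usual ingress-nginx namespace, which the log does not name):

  # Block until every pod matching the addon's label reports Ready, or time out.
  $ kubectl --context addons-376302 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=3m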
	I0927 00:26:22.051338  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:22.557687  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:23.050033  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:23.550251  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:23.834802  589846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:26:23.852759  589846 api_server.go:72] duration metric: took 1m28.434033953s to wait for apiserver process to appear ...
	I0927 00:26:23.852823  589846 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:26:23.852875  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:26:23.852973  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:26:23.888174  589846 cri.go:89] found id: "4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:23.888196  589846 cri.go:89] found id: ""
	I0927 00:26:23.888203  589846 logs.go:276] 1 containers: [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984]
	I0927 00:26:23.888278  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:23.892332  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 00:26:23.892403  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:26:23.948926  589846 cri.go:89] found id: "7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:23.948946  589846 cri.go:89] found id: ""
	I0927 00:26:23.948955  589846 logs.go:276] 1 containers: [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611]
	I0927 00:26:23.949044  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:23.953268  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 00:26:23.953359  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:26:23.992571  589846 cri.go:89] found id: "67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:23.992593  589846 cri.go:89] found id: ""
	I0927 00:26:23.992601  589846 logs.go:276] 1 containers: [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10]
	I0927 00:26:23.992690  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:23.996938  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:26:23.997055  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:26:24.050215  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:24.053929  589846 cri.go:89] found id: "9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:24.053953  589846 cri.go:89] found id: ""
	I0927 00:26:24.053970  589846 logs.go:276] 1 containers: [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c]
	I0927 00:26:24.054049  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:24.061300  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:26:24.061411  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:26:24.111753  589846 cri.go:89] found id: "70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:24.111783  589846 cri.go:89] found id: ""
	I0927 00:26:24.111792  589846 logs.go:276] 1 containers: [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac]
	I0927 00:26:24.111912  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:24.115741  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:26:24.115840  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:26:24.165826  589846 cri.go:89] found id: "4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:24.165846  589846 cri.go:89] found id: ""
	I0927 00:26:24.165854  589846 logs.go:276] 1 containers: [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e]
	I0927 00:26:24.165913  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:24.172749  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 00:26:24.172834  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:26:24.224683  589846 cri.go:89] found id: "808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:24.224755  589846 cri.go:89] found id: ""
	I0927 00:26:24.224771  589846 logs.go:276] 1 containers: [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58]
	I0927 00:26:24.224850  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:24.228207  589846 logs.go:123] Gathering logs for coredns [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10] ...
	I0927 00:26:24.228272  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:24.280447  589846 logs.go:123] Gathering logs for kube-scheduler [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c] ...
	I0927 00:26:24.280475  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:24.335151  589846 logs.go:123] Gathering logs for kube-controller-manager [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e] ...
	I0927 00:26:24.335180  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:24.429675  589846 logs.go:123] Gathering logs for kindnet [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58] ...
	I0927 00:26:24.429761  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:24.481326  589846 logs.go:123] Gathering logs for containerd ...
	I0927 00:26:24.481499  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 00:26:24.568367  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:24.593361  589846 logs.go:123] Gathering logs for kubelet ...
	I0927 00:26:24.593437  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:26:24.655704  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611441    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-configmap": failed to list *v1.ConfigMap: configmaps "volcano-admission-configmap" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:24.656049  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:24.656267  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:24.656693  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:24.656906  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:24.657349  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	I0927 00:26:24.753927  589846 logs.go:123] Gathering logs for kube-apiserver [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984] ...
	I0927 00:26:24.754007  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:24.862260  589846 logs.go:123] Gathering logs for etcd [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611] ...
	I0927 00:26:24.862335  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:24.915653  589846 logs.go:123] Gathering logs for container status ...
	I0927 00:26:24.915738  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:26:25.004317  589846 logs.go:123] Gathering logs for dmesg ...
	I0927 00:26:25.004407  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:26:25.037094  589846 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:26:25.037121  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:26:25.050725  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:25.285189  589846 logs.go:123] Gathering logs for kube-proxy [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac] ...
	I0927 00:26:25.285366  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:25.326077  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:25.326141  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:26:25.326202  589846 out.go:270] X Problems detected in kubelet:
	W0927 00:26:25.326247  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:25.326280  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:25.326317  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:25.326350  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:25.326381  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	I0927 00:26:25.326410  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:25.326429  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:26:25.549998  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:26.051432  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:26.550166  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:27.051283  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:26:27.550813  589846 kapi.go:107] duration metric: took 1m21.505670609s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
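For an interactive view of what the csi-hostpath-driver wait was polling, the same label selector can be queried across all namespaces (hypothetical invocation):

  $ kubectl --context addons-376302 get pods -A \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver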
	I0927 00:26:30.856644  589846 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:26:30.856670  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:31.354132  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:31.853269  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:32.353801  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:32.854333  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:33.353446  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:33.853086  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:34.353802  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:34.853345  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:35.327472  589846 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:26:35.335826  589846 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:26:35.336841  589846 api_server.go:141] control plane version: v1.31.1
	I0927 00:26:35.336864  589846 api_server.go:131] duration metric: took 11.484020007s to wait for apiserver health ...
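The healthz probe logged above is a plain HTTPS GET against the apiserver; the 200 response with body "ok" is what api_server.go accepts as healthy. Reproducing it from the host needs the cluster CA, since the endpoint serves minikube's self-signed certificate (a sketch; the ca.crt path assumes the default MINIKUBE_HOME):

  # Same endpoint the log shows returning 200/ok.
  $ curl --cacert ~/.minikube/ca.crt https://192.168.49.2:8443/healthz
  ok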
	I0927 00:26:35.336872  589846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:26:35.336894  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:26:35.336958  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:26:35.353760  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:35.381569  589846 cri.go:89] found id: "4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:35.381591  589846 cri.go:89] found id: ""
	I0927 00:26:35.381599  589846 logs.go:276] 1 containers: [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984]
	I0927 00:26:35.381659  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.385244  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 00:26:35.385313  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:26:35.431533  589846 cri.go:89] found id: "7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:35.431556  589846 cri.go:89] found id: ""
	I0927 00:26:35.431564  589846 logs.go:276] 1 containers: [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611]
	I0927 00:26:35.431620  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.435185  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 00:26:35.435263  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:26:35.476042  589846 cri.go:89] found id: "67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:35.476064  589846 cri.go:89] found id: ""
	I0927 00:26:35.476072  589846 logs.go:276] 1 containers: [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10]
	I0927 00:26:35.476128  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.479939  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:26:35.480015  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:26:35.521726  589846 cri.go:89] found id: "9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:35.521748  589846 cri.go:89] found id: ""
	I0927 00:26:35.521756  589846 logs.go:276] 1 containers: [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c]
	I0927 00:26:35.521843  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.525517  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:26:35.525588  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:26:35.562815  589846 cri.go:89] found id: "70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:35.562836  589846 cri.go:89] found id: ""
	I0927 00:26:35.562843  589846 logs.go:276] 1 containers: [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac]
	I0927 00:26:35.562923  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.566439  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:26:35.566546  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:26:35.603323  589846 cri.go:89] found id: "4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:35.603345  589846 cri.go:89] found id: ""
	I0927 00:26:35.603352  589846 logs.go:276] 1 containers: [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e]
	I0927 00:26:35.603412  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.606859  589846 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 00:26:35.606939  589846 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:26:35.644360  589846 cri.go:89] found id: "808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:35.644382  589846 cri.go:89] found id: ""
	I0927 00:26:35.644391  589846 logs.go:276] 1 containers: [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58]
	I0927 00:26:35.644449  589846 ssh_runner.go:195] Run: which crictl
	I0927 00:26:35.648420  589846 logs.go:123] Gathering logs for kubelet ...
	I0927 00:26:35.648558  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:26:35.690141  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611441    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-configmap": failed to list *v1.ConfigMap: configmaps "volcano-admission-configmap" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:35.690403  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:35.690631  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:35.690865  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:35.691052  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:35.691277  589846 logs.go:138] Found kubelet problem: Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	I0927 00:26:35.790425  589846 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:26:35.790463  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:26:35.855615  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:35.928547  589846 logs.go:123] Gathering logs for etcd [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611] ...
	I0927 00:26:35.928576  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611"
	I0927 00:26:35.972297  589846 logs.go:123] Gathering logs for coredns [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10] ...
	I0927 00:26:35.972333  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10"
	I0927 00:26:36.024269  589846 logs.go:123] Gathering logs for kube-scheduler [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c] ...
	I0927 00:26:36.024301  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c"
	I0927 00:26:36.076244  589846 logs.go:123] Gathering logs for kube-proxy [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac] ...
	I0927 00:26:36.076288  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac"
	I0927 00:26:36.118132  589846 logs.go:123] Gathering logs for container status ...
	I0927 00:26:36.118162  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:26:36.171638  589846 logs.go:123] Gathering logs for dmesg ...
	I0927 00:26:36.171668  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:26:36.188278  589846 logs.go:123] Gathering logs for kube-apiserver [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984] ...
	I0927 00:26:36.188307  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984"
	I0927 00:26:36.280294  589846 logs.go:123] Gathering logs for kube-controller-manager [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e] ...
	I0927 00:26:36.280331  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e"
	I0927 00:26:36.354237  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:36.357116  589846 logs.go:123] Gathering logs for kindnet [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58] ...
	I0927 00:26:36.357149  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58"
	I0927 00:26:36.409635  589846 logs.go:123] Gathering logs for containerd ...
	I0927 00:26:36.409664  589846 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 00:26:36.490359  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:36.490390  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:26:36.490475  589846 out.go:270] X Problems detected in kubelet:
	W0927 00:26:36.490488  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611497    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-configmap\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"volcano-admission-configmap\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:36.490495  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611538    1482 reflector.go:561] object-"volcano-system"/"volcano-admission-secret": failed to list *v1.Secret: secrets "volcano-admission-secret" is forbidden: User "system:node:addons-376302" cannot list resource "secrets" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:36.490502  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611549    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"volcano-admission-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"volcano-admission-secret\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"secrets\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	W0927 00:26:36.490510  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: W0927 00:25:04.611785    1482 reflector.go:561] object-"volcano-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-376302" cannot list resource "configmaps" in API group "" in the namespace "volcano-system": no relationship found between node 'addons-376302' and this object
	W0927 00:26:36.490518  589846 out.go:270]   Sep 27 00:25:04 addons-376302 kubelet[1482]: E0927 00:25:04.611806    1482 reflector.go:158] "Unhandled Error" err="object-\"volcano-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-376302\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"volcano-system\": no relationship found between node 'addons-376302' and this object" logger="UnhandledError"
	I0927 00:26:36.490529  589846 out.go:358] Setting ErrFile to fd 2...
	I0927 00:26:36.490535  589846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:26:36.853765  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:37.353362  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:37.852986  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:38.354117  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:38.853371  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:39.353155  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:39.854428  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:40.354139  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:40.853092  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:41.354336  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:41.853758  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:42.355190  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:42.853321  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:43.353816  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:43.853549  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:44.354339  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:44.855010  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:45.356023  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:45.853403  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:46.354526  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:46.501603  589846 system_pods.go:59] 18 kube-system pods found
	I0927 00:26:46.501690  589846 system_pods.go:61] "coredns-7c65d6cfc9-7kj2z" [0e9d70eb-bd84-48cf-a69a-c107799c6ce2] Running
	I0927 00:26:46.501712  589846 system_pods.go:61] "csi-hostpath-attacher-0" [658357ee-6b68-4118-99de-606d3eb09a2e] Running
	I0927 00:26:46.501733  589846 system_pods.go:61] "csi-hostpath-resizer-0" [5d4c5d6a-7a0a-4587-9fd2-a54245b2b12b] Running
	I0927 00:26:46.501766  589846 system_pods.go:61] "csi-hostpathplugin-4lcld" [3f23d6a5-9f44-4de4-bcf4-28939ccf2b77] Running
	I0927 00:26:46.501792  589846 system_pods.go:61] "etcd-addons-376302" [f97a17e9-257a-4301-82a8-8e04feb4b16b] Running
	I0927 00:26:46.501812  589846 system_pods.go:61] "kindnet-c9rgc" [30fd8606-584c-48de-88ac-d9331440a35d] Running
	I0927 00:26:46.501831  589846 system_pods.go:61] "kube-apiserver-addons-376302" [87d2f921-4a69-44df-a6f3-3c0eec8a32e3] Running
	I0927 00:26:46.501853  589846 system_pods.go:61] "kube-controller-manager-addons-376302" [7848f9c8-5b21-46cf-8a42-597a584755b6] Running
	I0927 00:26:46.501881  589846 system_pods.go:61] "kube-ingress-dns-minikube" [5afdfdfb-5a10-4caa-b45d-473693ec26dc] Running
	I0927 00:26:46.501904  589846 system_pods.go:61] "kube-proxy-m5q7w" [a58b0736-e856-43a5-93b4-38b34d8baa0b] Running
	I0927 00:26:46.501923  589846 system_pods.go:61] "kube-scheduler-addons-376302" [4ed47002-2c06-406e-bc25-03dfed4f987a] Running
	I0927 00:26:46.501943  589846 system_pods.go:61] "metrics-server-84c5f94fbc-vtnnr" [bbb2738a-95e1-47e4-b01e-618faac704d5] Running
	I0927 00:26:46.501960  589846 system_pods.go:61] "nvidia-device-plugin-daemonset-wjw6j" [473f336e-2969-4568-9fc5-c53cc41f9fb9] Running
	I0927 00:26:46.501987  589846 system_pods.go:61] "registry-66c9cd494c-pkjkw" [5f4663c9-a883-4664-8c1a-10bd6625888a] Running
	I0927 00:26:46.502014  589846 system_pods.go:61] "registry-proxy-nxrj2" [33cbf2e0-38da-4675-b8c1-d7be72de0161] Running
	I0927 00:26:46.502033  589846 system_pods.go:61] "snapshot-controller-56fcc65765-c96j8" [cfe711ea-3a21-4ccd-89ad-0ed1f0c3fc45] Running
	I0927 00:26:46.502052  589846 system_pods.go:61] "snapshot-controller-56fcc65765-grxpg" [493813e3-31bb-4f7c-894f-5e1cb73b4138] Running
	I0927 00:26:46.502072  589846 system_pods.go:61] "storage-provisioner" [35453f84-97b4-42e4-9fc7-9a1b4f42ae06] Running
	I0927 00:26:46.502101  589846 system_pods.go:74] duration metric: took 11.165220177s to wait for pod list to return data ...
	I0927 00:26:46.502129  589846 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:26:46.505042  589846 default_sa.go:45] found service account: "default"
	I0927 00:26:46.505066  589846 default_sa.go:55] duration metric: took 2.917157ms for default service account to be created ...
	I0927 00:26:46.505076  589846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:26:46.517178  589846 system_pods.go:86] 18 kube-system pods found
	I0927 00:26:46.517277  589846 system_pods.go:89] "coredns-7c65d6cfc9-7kj2z" [0e9d70eb-bd84-48cf-a69a-c107799c6ce2] Running
	I0927 00:26:46.517299  589846 system_pods.go:89] "csi-hostpath-attacher-0" [658357ee-6b68-4118-99de-606d3eb09a2e] Running
	I0927 00:26:46.517316  589846 system_pods.go:89] "csi-hostpath-resizer-0" [5d4c5d6a-7a0a-4587-9fd2-a54245b2b12b] Running
	I0927 00:26:46.517350  589846 system_pods.go:89] "csi-hostpathplugin-4lcld" [3f23d6a5-9f44-4de4-bcf4-28939ccf2b77] Running
	I0927 00:26:46.517377  589846 system_pods.go:89] "etcd-addons-376302" [f97a17e9-257a-4301-82a8-8e04feb4b16b] Running
	I0927 00:26:46.517398  589846 system_pods.go:89] "kindnet-c9rgc" [30fd8606-584c-48de-88ac-d9331440a35d] Running
	I0927 00:26:46.517418  589846 system_pods.go:89] "kube-apiserver-addons-376302" [87d2f921-4a69-44df-a6f3-3c0eec8a32e3] Running
	I0927 00:26:46.517449  589846 system_pods.go:89] "kube-controller-manager-addons-376302" [7848f9c8-5b21-46cf-8a42-597a584755b6] Running
	I0927 00:26:46.517472  589846 system_pods.go:89] "kube-ingress-dns-minikube" [5afdfdfb-5a10-4caa-b45d-473693ec26dc] Running
	I0927 00:26:46.517490  589846 system_pods.go:89] "kube-proxy-m5q7w" [a58b0736-e856-43a5-93b4-38b34d8baa0b] Running
	I0927 00:26:46.517509  589846 system_pods.go:89] "kube-scheduler-addons-376302" [4ed47002-2c06-406e-bc25-03dfed4f987a] Running
	I0927 00:26:46.517529  589846 system_pods.go:89] "metrics-server-84c5f94fbc-vtnnr" [bbb2738a-95e1-47e4-b01e-618faac704d5] Running
	I0927 00:26:46.517556  589846 system_pods.go:89] "nvidia-device-plugin-daemonset-wjw6j" [473f336e-2969-4568-9fc5-c53cc41f9fb9] Running
	I0927 00:26:46.517582  589846 system_pods.go:89] "registry-66c9cd494c-pkjkw" [5f4663c9-a883-4664-8c1a-10bd6625888a] Running
	I0927 00:26:46.517600  589846 system_pods.go:89] "registry-proxy-nxrj2" [33cbf2e0-38da-4675-b8c1-d7be72de0161] Running
	I0927 00:26:46.517619  589846 system_pods.go:89] "snapshot-controller-56fcc65765-c96j8" [cfe711ea-3a21-4ccd-89ad-0ed1f0c3fc45] Running
	I0927 00:26:46.517637  589846 system_pods.go:89] "snapshot-controller-56fcc65765-grxpg" [493813e3-31bb-4f7c-894f-5e1cb73b4138] Running
	I0927 00:26:46.517666  589846 system_pods.go:89] "storage-provisioner" [35453f84-97b4-42e4-9fc7-9a1b4f42ae06] Running
	I0927 00:26:46.517693  589846 system_pods.go:126] duration metric: took 12.610023ms to wait for k8s-apps to be running ...
	I0927 00:26:46.517714  589846 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:26:46.517803  589846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:26:46.539453  589846 system_svc.go:56] duration metric: took 21.717185ms WaitForService to wait for kubelet
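The system_svc.go check above runs systemctl inside the node; --quiet suppresses output, so only the exit status is consulted. The same probe can be run by hand through minikube's ssh wrapper (hypothetical invocation for this profile):

  # Prints "active" when the kubelet unit is up.
  $ minikube -p addons-376302 ssh "sudo systemctl is-active kubelet"
  active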
	I0927 00:26:46.539529  589846 kubeadm.go:582] duration metric: took 1m51.120800059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:26:46.539560  589846 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:26:46.542789  589846 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 00:26:46.542823  589846 node_conditions.go:123] node cpu capacity is 2
	I0927 00:26:46.542835  589846 node_conditions.go:105] duration metric: took 3.253393ms to run NodePressure ...
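The NodePressure figures come straight from the node's reported capacity and can be confirmed with a one-liner (hypothetical invocation; the jsonpath prints the whole capacity map, which for this run should include cpu 2 and ephemeral-storage 203034800Ki):

  $ kubectl --context addons-376302 get node addons-376302 \
      -o jsonpath='{.status.capacity}'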
	I0927 00:26:46.542847  589846 start.go:241] waiting for startup goroutines ...
	I0927 00:26:46.853182  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:47.353276  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:47.854098  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:48.353969  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:48.853908  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:49.353138  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:49.853721  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:50.354060  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:50.853181  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:51.353793  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:51.852963  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:52.354044  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:52.853333  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:53.353799  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:53.853716  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:54.353824  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:54.853365  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:55.354111  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:55.854594  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:26:56.353548  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same "waiting for pod" message repeated at ~500ms intervals from 00:26:56.853 through 00:27:37.855 ...]
	I0927 00:27:38.353285  589846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:27:38.854204  589846 kapi.go:107] duration metric: took 2m31.004265308s to wait for kubernetes.io/minikube-addons=gcp-auth ...
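For reproducing this wait outside the test harness: the ~2.5-minute poll above is roughly what a single kubectl wait against the same label selector does. A sketch, assuming the gcp-auth namespace shown in the pod listing later in this report and an illustrative 5-minute timeout (not the harness's actual value):

	kubectl --context addons-376302 -n gcp-auth wait pod \
	  -l kubernetes.io/minikube-addons=gcp-auth \
	  --for=condition=Ready --timeout=5m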
	I0927 00:27:38.858129  589846 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-376302 cluster.
	I0927 00:27:38.860580  589846 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:27:38.862338  589846 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:27:38.864671  589846 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0927 00:27:38.866762  589846 addons.go:510] duration metric: took 2m43.447583101s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0927 00:27:38.866808  589846 start.go:246] waiting for cluster config update ...
	I0927 00:27:38.866831  589846 start.go:255] writing updated cluster config ...
	I0927 00:27:38.867113  589846 ssh_runner.go:195] Run: rm -f paused
	I0927 00:27:39.229711  589846 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:27:39.233516  589846 out.go:177] * Done! kubectl is now configured to use "addons-376302" cluster and "default" namespace by default
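The gcp-auth hint above ("add a label with the `gcp-auth-skip-secret` key") can be exercised with a pod spec like the following minimal sketch; the pod name, image, and the label value "true" are illustrative assumptions, while the label key is the one named in the log:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"     # key quoted in the addon output above; value assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                     # illustrative image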
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	df322167f54c8       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   97bec1de5aa74       gadget-zmj5f
	f1a7500408207       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   9c22e008b7f9c       gcp-auth-89d5ffd79-tbqxs
	1ab28178a137c       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   1b71574750900       csi-hostpathplugin-4lcld
	8f9433aa0eaac       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   1b71574750900       csi-hostpathplugin-4lcld
	f0d5ba9b31a8c       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   1b71574750900       csi-hostpathplugin-4lcld
	fc275cd351a01       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   1b71574750900       csi-hostpathplugin-4lcld
	ca9501635c8fd       1a9605c872c1d       4 minutes ago       Running             admission                                0                   35aa985b2de42       volcano-admission-5874dfdd79-c5bgp
	ec84c8b9a3356       289a818c8d9c5       4 minutes ago       Running             controller                               0                   6b66cf23c5c36       ingress-nginx-controller-bc57996ff-pwskp
	357b1b4c6484f       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   1b71574750900       csi-hostpathplugin-4lcld
	4425b3981e646       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   e4e20910111aa       csi-hostpath-resizer-0
	316c6e5d5b4fe       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   fd6bd30adde40       nvidia-device-plugin-daemonset-wjw6j
	3886497d74009       77bdba588b953       4 minutes ago       Running             yakd                                     0                   fa36fc20752b8       yakd-dashboard-67d98fc6b-xlrnc
	f776700ca3132       420193b27261a       4 minutes ago       Exited              patch                                    2                   637179d0321bb       ingress-nginx-admission-patch-rv4dz
	3bdcaeef4c6f1       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   1b71574750900       csi-hostpathplugin-4lcld
	666ff7f01667f       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   f10388a7cb2ba       volcano-controllers-789ffc5785-bm7lv
	89abc2e18c591       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   49c8f282961a5       volcano-scheduler-6c9778cbdf-c8k87
	b2d48324f7029       420193b27261a       4 minutes ago       Exited              create                                   0                   9a800868e589d       ingress-nginx-admission-create-hrlmc
	70e70a044ea25       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   31ae2bdde9c27       csi-hostpath-attacher-0
	01a4874daef43       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   d026c44e5c472       metrics-server-84c5f94fbc-vtnnr
	67c512afd43d2       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   9bd9e3d843048       coredns-7c65d6cfc9-7kj2z
	62d6e8f484e1f       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   17d4c00649ae6       registry-proxy-nxrj2
	02ed227fed73e       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   5bfe149aba53c       local-path-provisioner-86d989889c-tr8ln
	08b23ed384c5c       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   eb7dd5af77340       registry-66c9cd494c-pkjkw
	c505185ce6761       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   e32d850efee5c       cloud-spanner-emulator-5b584cc74-ml7ps
	9c09924997d0d       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   c38b9b6f8873a       snapshot-controller-56fcc65765-grxpg
	90065333162a4       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   dec2b21a8ebf9       snapshot-controller-56fcc65765-c96j8
	d0b7fd26302b5       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   ec2f34f33dc00       kube-ingress-dns-minikube
	886eea20089a6       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   a25e4c12dd763       storage-provisioner
	808cd2049ab3f       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   b700845149ee0       kindnet-c9rgc
	70222c66f1e2a       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   e80ca7c05bbc7       kube-proxy-m5q7w
	4b424e70c475b       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   60cb1d620c092       kube-controller-manager-addons-376302
	4f7d4f2ea82a3       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   5412bf86b4696       kube-apiserver-addons-376302
	9705ae8c56794       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   b6e34a2ccbd11       kube-scheduler-addons-376302
	7740b0ab5972c       27e3830e14027       6 minutes ago       Running             etcd                                     0                   6427177332cdb       etcd-addons-376302
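The table shows the gadget container Exited on its fifth attempt while everything else is Running. The usual next step for a container stuck in a restart loop is to pull the logs of the previous attempt; the context, namespace, and pod name below are taken from this report, and --previous is a standard kubectl flag:

	kubectl --context addons-376302 -n gadget logs gadget-zmj5f --previous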
	
	
	==> containerd <==
	Sep 27 00:27:50 addons-376302 containerd[813]: time="2024-09-27T00:27:50.629027676Z" level=info msg="RemovePodSandbox \"fa5337ace6ddb21d901e9945fb14401eb39821cf42abd8bb9c7b32749622a982\" returns successfully"
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.501987030Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.631002206Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.631240392Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.634557736Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 132.519596ms"
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.634737969Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.637046890Z" level=info msg="CreateContainer within sandbox \"97bec1de5aa749411171e5390070322b41d3b0f78b05a8acd6f19d1be792d70d\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.658450138Z" level=info msg="CreateContainer within sandbox \"97bec1de5aa749411171e5390070322b41d3b0f78b05a8acd6f19d1be792d70d\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298\""
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.659418544Z" level=info msg="StartContainer for \"df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298\""
	Sep 27 00:28:31 addons-376302 containerd[813]: time="2024-09-27T00:28:31.719793605Z" level=info msg="StartContainer for \"df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298\" returns successfully"
	Sep 27 00:28:33 addons-376302 containerd[813]: time="2024-09-27T00:28:33.364949031Z" level=info msg="shim disconnected" id=df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298 namespace=k8s.io
	Sep 27 00:28:33 addons-376302 containerd[813]: time="2024-09-27T00:28:33.365014541Z" level=warning msg="cleaning up after shim disconnected" id=df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298 namespace=k8s.io
	Sep 27 00:28:33 addons-376302 containerd[813]: time="2024-09-27T00:28:33.365095607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 27 00:28:33 addons-376302 containerd[813]: time="2024-09-27T00:28:33.721745181Z" level=info msg="RemoveContainer for \"aa8119690d1e5569ca3c7e803b0a1a9508116248af9c5103917aa9de901a4895\""
	Sep 27 00:28:33 addons-376302 containerd[813]: time="2024-09-27T00:28:33.728163788Z" level=info msg="RemoveContainer for \"aa8119690d1e5569ca3c7e803b0a1a9508116248af9c5103917aa9de901a4895\" returns successfully"
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.638404532Z" level=info msg="RemoveContainer for \"c7193561651481697aa720fec0dd25ba2e114fce43f62b02738046343a770926\""
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.644687960Z" level=info msg="RemoveContainer for \"c7193561651481697aa720fec0dd25ba2e114fce43f62b02738046343a770926\" returns successfully"
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.646749744Z" level=info msg="StopPodSandbox for \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\""
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.655136719Z" level=info msg="TearDown network for sandbox \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\" successfully"
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.655178065Z" level=info msg="StopPodSandbox for \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\" returns successfully"
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.655791600Z" level=info msg="RemovePodSandbox for \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\""
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.655836359Z" level=info msg="Forcibly stopping sandbox \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\""
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.663898684Z" level=info msg="TearDown network for sandbox \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\" successfully"
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.675117844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 27 00:28:50 addons-376302 containerd[813]: time="2024-09-27T00:28:50.675393904Z" level=info msg="RemovePodSandbox \"43d77c5188398c4e404535f0000dbc956c6cc5973788ad5853a30d417121c0ae\" returns successfully"
	
	
	==> coredns [67c512afd43d2e80ba1ff931620f8eb0cd64a4edb278defa274ff39914966d10] <==
	[INFO] 10.244.0.6:41006 - 42494 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106083s
	[INFO] 10.244.0.6:41006 - 26004 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002021579s
	[INFO] 10.244.0.6:41006 - 20492 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002191335s
	[INFO] 10.244.0.6:41006 - 1755 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000113551s
	[INFO] 10.244.0.6:41006 - 42613 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00007712s
	[INFO] 10.244.0.6:44650 - 22691 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106256s
	[INFO] 10.244.0.6:44650 - 22904 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150087s
	[INFO] 10.244.0.6:49002 - 35784 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089026s
	[INFO] 10.244.0.6:49002 - 36039 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047581s
	[INFO] 10.244.0.6:47598 - 60650 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073961s
	[INFO] 10.244.0.6:47598 - 60205 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000949s
	[INFO] 10.244.0.6:48247 - 62778 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00173107s
	[INFO] 10.244.0.6:48247 - 63223 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001707382s
	[INFO] 10.244.0.6:34691 - 20 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109177s
	[INFO] 10.244.0.6:34691 - 456 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000082625s
	[INFO] 10.244.0.24:52976 - 24814 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188939s
	[INFO] 10.244.0.24:38458 - 64713 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000294702s
	[INFO] 10.244.0.24:57465 - 18295 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192204s
	[INFO] 10.244.0.24:42232 - 32899 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000269603s
	[INFO] 10.244.0.24:59898 - 12554 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000207589s
	[INFO] 10.244.0.24:56947 - 35148 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090527s
	[INFO] 10.244.0.24:48929 - 53715 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00257224s
	[INFO] 10.244.0.24:53347 - 12628 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002084717s
	[INFO] 10.244.0.24:38976 - 60360 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002161172s
	[INFO] 10.244.0.24:40243 - 31776 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002291937s
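The NXDOMAIN fan-out above (each name tried with the namespace, svc, cluster, and EC2-internal suffixes before the bare name resolves) is ordinary Kubernetes DNS search-path expansion. A pod resolver config that would produce exactly this pattern looks roughly like the sketch below; ndots:5 is the stock Kubernetes default and the nameserver address is the conventional kube-dns ClusterIP, neither captured directly in this report:

	# /etc/resolv.conf inside a kube-system pod (sketch)
	search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5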
	
	
	==> describe nodes <==
	Name:               addons-376302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-376302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-376302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_24_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-376302
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-376302"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:24:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-376302
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:30:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:27:54 +0000   Fri, 27 Sep 2024 00:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:27:54 +0000   Fri, 27 Sep 2024 00:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:27:54 +0000   Fri, 27 Sep 2024 00:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:27:54 +0000   Fri, 27 Sep 2024 00:24:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-376302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d8279570e0347b3be77cabd7e3528af
	  System UUID:                09148d62-bd21-4d0a-b61b-d0c0034a3256
	  Boot ID:                    da2f37bf-bf94-43b2-9935-20902b5113b2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-ml7ps      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-zmj5f                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-tbqxs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-pwskp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-7kj2z                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-4lcld                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 etcd-addons-376302                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-c9rgc                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-376302                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-376302       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-m5q7w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-376302                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-vtnnr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-wjw6j        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-pkjkw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-nxrj2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-c96j8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-grxpg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-tr8ln     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-5874dfdd79-c5bgp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-789ffc5785-bm7lv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-6c9778cbdf-c8k87          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xlrnc              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m1s  kube-proxy       
	  Normal   Starting                 6m7s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s  kubelet          Node addons-376302 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s  kubelet          Node addons-376302 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s  kubelet          Node addons-376302 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s  node-controller  Node addons-376302 event: Registered Node addons-376302 in Controller
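Worth working out from the resource table above: with 2 allocatable CPUs and 1050m (52%) already requested, the headroom on this single node is

	allocatable cpu          = 2000m
	requested by pods (52%)  = 1050m
	remaining for new pods   = 2000m - 1050m = 950m

so any additional pod requesting more than 950m of CPU cannot be scheduled here.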
	
	
	==> dmesg <==
	
	
	==> etcd [7740b0ab5972c339d1019f9d139514b291ad347c45d7e78e263cf6e91dcf2611] <==
	{"level":"info","ts":"2024-09-27T00:24:44.354481Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T00:24:44.354673Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T00:24:44.354693Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T00:24:44.354750Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-27T00:24:44.354762Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-27T00:24:44.539678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T00:24:44.539887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T00:24:44.540024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-27T00:24:44.540105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T00:24:44.540187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T00:24:44.540269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-27T00:24:44.540343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T00:24:44.541889Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:24:44.543380Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-376302 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T00:24:44.543678Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:24:44.543868Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:24:44.544165Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:24:44.544264Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:24:44.543894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:24:44.543919Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T00:24:44.545891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T00:24:44.548229Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:24:44.549938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:24:44.587252Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:24:44.588340Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [f1a7500408207d74353ed35be7ffee7a0066ed9d03c3042e8043b467d2c5a12c] <==
	2024/09/27 00:27:37 GCP Auth Webhook started!
	2024/09/27 00:27:55 Ready to marshal response ...
	2024/09/27 00:27:55 Ready to write response ...
	2024/09/27 00:27:56 Ready to marshal response ...
	2024/09/27 00:27:56 Ready to write response ...
	
	
	==> kernel <==
	 00:30:57 up  4:13,  0 users,  load average: 0.94, 1.68, 2.24
	Linux addons-376302 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [808cd2049ab3fb612ff58d4ce254cf76df8188497a44e3a97faa28c6ed3e6f58] <==
	I0927 00:28:56.359926       1 main.go:299] handling current node
	[... the same two-line "Handling node with IPs ... / handling current node" pair repeated every ~10s from 00:29:06 through 00:30:46 ...]
	I0927 00:30:56.359173       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:30:56.359267       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4f7d4f2ea82a3ba0369f64f959c96362a69a47e7aa17b2b833064d0de2690984] <==
	W0927 00:26:10.546831       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.35.108:443: connect: connection refused
	W0927 00:26:11.072045       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.35.108:443: connect: connection refused
	[... the same failed mutatequeue.volcano.sh webhook warning repeated roughly once per second from 00:26:12 through 00:26:25 ...]
	W0927 00:26:27.032206       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.35.108:443: connect: connection refused
	W0927 00:26:30.450631       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.99.251:443: connect: connection refused
	E0927 00:26:30.450671       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.99.251:443: connect: connection refused" logger="UnhandledError"
	W0927 00:27:10.491537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.99.251:443: connect: connection refused
	E0927 00:27:10.491822       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.99.251:443: connect: connection refused" logger="UnhandledError"
	W0927 00:27:10.554317       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.99.251:443: connect: connection refused
	E0927 00:27:10.554355       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.99.251:443: connect: connection refused" logger="UnhandledError"
	I0927 00:27:55.778873       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0927 00:27:55.814578       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
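The contrast above (mutatequeue.volcano.sh "failing closed" versus gcp-auth-mutate.k8s.io "failing open") comes from each webhook's failurePolicy. A sketch of the two settings in admissionregistration.k8s.io/v1 terms; the names, service references, and rules below are illustrative and do not reproduce the actual volcano or gcp-auth manifests:

	apiVersion: admissionregistration.k8s.io/v1
	kind: MutatingWebhookConfiguration
	metadata:
	  name: failure-policy-demo              # illustrative name
	webhooks:
	- name: demo.fail-closed.example.com     # analogous to mutatequeue.volcano.sh
	  failurePolicy: Fail                    # "failing closed": requests are rejected while the webhook is unreachable
	  clientConfig:
	    service: {name: demo-svc, namespace: default, path: /mutate}
	  rules:
	  - apiGroups: [""]
	    apiVersions: ["v1"]
	    operations: ["CREATE"]
	    resources: ["pods"]
	  admissionReviewVersions: ["v1"]
	  sideEffects: None
	- name: demo.fail-open.example.com       # analogous to gcp-auth-mutate.k8s.io
	  failurePolicy: Ignore                  # "failing open": requests proceed while the webhook is unreachable
	  clientConfig:
	    service: {name: demo-svc, namespace: default, path: /mutate}
	  rules:
	  - apiGroups: [""]
	    apiVersions: ["v1"]
	    operations: ["CREATE"]
	    resources: ["pods"]
	  admissionReviewVersions: ["v1"]
	  sideEffects: None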
	
	
	==> kube-controller-manager [4b424e70c475b5fd67848be176bc70ecc24418788697034a3eda05ed24ef077e] <==
	I0927 00:27:10.525346       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:10.529271       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:10.539181       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:10.562862       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:10.576423       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:10.576727       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:10.588140       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:11.489024       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:11.509180       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:12.619056       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:12.642616       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:13.625583       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:13.637103       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:13.640358       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 00:27:13.646992       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:13.655823       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:13.665174       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 00:27:38.586274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="12.682859ms"
	I0927 00:27:38.586799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="57.641µs"
	I0927 00:27:43.033179       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0927 00:27:43.038776       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0927 00:27:43.131366       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0927 00:27:43.133401       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0927 00:27:54.707140       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-376302"
	I0927 00:27:55.494127       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [70222c66f1e2ab845e54fe52f713c4cb23e920f4022cfc3188f7e96d585a68ac] <==
	I0927 00:24:55.721719       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:24:56.078964       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:24:56.079042       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:24:56.207901       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:24:56.210095       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:24:56.212268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:24:56.212604       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:24:56.212618       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:24:56.221069       1 config.go:199] "Starting service config controller"
	I0927 00:24:56.221096       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:24:56.221127       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:24:56.221132       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:24:56.221514       1 config.go:328] "Starting node config controller"
	I0927 00:24:56.221521       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:24:56.326230       1 shared_informer.go:320] Caches are synced for node config
	I0927 00:24:56.326269       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:24:56.326303       1 shared_informer.go:320] Caches are synced for endpoint slice config
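The kube-proxy configuration warning above already names its remedy. On a cluster where NodePort traffic should only be accepted on the primary node IP, the quoted flag would be passed at startup; shown here as a bare command-line sketch, not minikube's actual invocation:

	kube-proxy --nodeport-addresses=primary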
	
	
	==> kube-scheduler [9705ae8c567943563ca9c096643beef9dc8914c2b3480205f7fbb9c24622df4c] <==
	W0927 00:24:48.367157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:24:48.367313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.367411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:24:48.367433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.367504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:24:48.371225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.367540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:24:48.371286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.367897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:24:48.371320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.367952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:24:48.371348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.371188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:24:48.371371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.371663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:24:48.371695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.371774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:24:48.371793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.371922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:24:48.371944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.371991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:24:48.372009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:24:48.372057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:24:48.372074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:24:49.956416       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
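	Note: the burst of "forbidden" list errors above is a startup ordering race: the scheduler's informers begin listing resources before its RBAC grants have propagated, and the closing "Caches are synced" line shows it recovered on its own. To confirm the scheduler's permissions after startup, kubectl's impersonation support offers a quick check (a sketch, not part of the test):

	# Ask the API server whether system:kube-scheduler may list pods cluster-wide
	kubectl --context addons-376302 auth can-i list pods --as=system:kube-scheduler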
	
	
	==> kubelet <==
	Sep 27 00:28:51 addons-376302 kubelet[1482]: I0927 00:28:51.499957    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:28:51 addons-376302 kubelet[1482]: E0927 00:28:51.500161    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:29:06 addons-376302 kubelet[1482]: I0927 00:29:06.499467    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:29:06 addons-376302 kubelet[1482]: E0927 00:29:06.500245    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:29:21 addons-376302 kubelet[1482]: I0927 00:29:21.500170    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:29:21 addons-376302 kubelet[1482]: E0927 00:29:21.500368    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:29:23 addons-376302 kubelet[1482]: I0927 00:29:23.500079    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-pkjkw" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:29:33 addons-376302 kubelet[1482]: I0927 00:29:33.499876    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:29:33 addons-376302 kubelet[1482]: E0927 00:29:33.500493    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:29:37 addons-376302 kubelet[1482]: I0927 00:29:37.499788    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nxrj2" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:29:46 addons-376302 kubelet[1482]: I0927 00:29:46.500588    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:29:46 addons-376302 kubelet[1482]: E0927 00:29:46.501266    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:29:54 addons-376302 kubelet[1482]: I0927 00:29:54.499472    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wjw6j" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:29:58 addons-376302 kubelet[1482]: I0927 00:29:58.500400    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:29:58 addons-376302 kubelet[1482]: E0927 00:29:58.501026    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:30:11 addons-376302 kubelet[1482]: I0927 00:30:11.499826    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:30:11 addons-376302 kubelet[1482]: E0927 00:30:11.500043    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:30:24 addons-376302 kubelet[1482]: I0927 00:30:24.499340    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:30:24 addons-376302 kubelet[1482]: E0927 00:30:24.499975    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:30:25 addons-376302 kubelet[1482]: I0927 00:30:25.499643    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-pkjkw" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:30:39 addons-376302 kubelet[1482]: I0927 00:30:39.499507    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:30:39 addons-376302 kubelet[1482]: E0927 00:30:39.499702    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
	Sep 27 00:30:49 addons-376302 kubelet[1482]: I0927 00:30:49.499310    1482 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nxrj2" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:30:54 addons-376302 kubelet[1482]: I0927 00:30:54.500385    1482 scope.go:117] "RemoveContainer" containerID="df322167f54c8f6fdc7975c55ef6710315ecf1fe2a3330f47a9249d243df2298"
	Sep 27 00:30:54 addons-376302 kubelet[1482]: E0927 00:30:54.500578    1482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zmj5f_gadget(45661758-86d8-449d-bd7c-64e643ed664a)\"" pod="gadget/gadget-zmj5f" podUID="45661758-86d8-449d-bd7c-64e643ed664a"
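	Note: the kubelet lines above record two background conditions unrelated to the Volcano failure: the gadget container stuck in a 2m40s CrashLoopBackOff, and pull-secret lookups failing because no "gcp-auth" secret exists (expected when the gcp-auth addon is not enabled). To see why the gadget container keeps exiting, one might pull the logs of its previous run (pod and container names taken from the lines above):

	# Show the logs of the last failed run of the gadget container
	kubectl --context addons-376302 -n gadget logs gadget-zmj5f -c gadget --previous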
	
	
	==> storage-provisioner [886eea20089a6051829661f7e8c9393ffc3db7c998aa37bb63252786d665a098] <==
	I0927 00:25:01.229767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:25:01.257267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:25:01.257355       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:25:01.273284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:25:01.273493       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-376302_5b24d069-04ae-4ef3-a6f2-a1c0937b1b29!
	I0927 00:25:01.274328       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7907ac98-ddd6-45db-84b2-74828e2a9989", APIVersion:"v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-376302_5b24d069-04ae-4ef3-a6f2-a1c0937b1b29 became leader
	I0927 00:25:01.374027       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-376302_5b24d069-04ae-4ef3-a6f2-a1c0937b1b29!
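	Note: the storage-provisioner lines above show it electing a leader through the kube-system/k8s.io-minikube-hostpath Endpoints object; the holder identity is recorded in an annotation on that object. A sketch for inspecting the current holder:

	# Dump the Endpoints object that backs the provisioner's leader lease
	kubectl --context addons-376302 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml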
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-376302 -n addons-376302
helpers_test.go:261: (dbg) Run:  kubectl --context addons-376302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-hrlmc ingress-nginx-admission-patch-rv4dz test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-376302 describe pod ingress-nginx-admission-create-hrlmc ingress-nginx-admission-patch-rv4dz test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-376302 describe pod ingress-nginx-admission-create-hrlmc ingress-nginx-admission-patch-rv4dz test-job-nginx-0: exit status 1 (93.42242ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hrlmc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rv4dz" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-376302 describe pod ingress-nginx-admission-create-hrlmc ingress-nginx-admission-patch-rv4dz test-job-nginx-0: exit status 1
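Note: the NotFound errors above are a teardown race rather than a collection bug: all three pods existed when the field-selector query at helpers_test.go:261 ran, but had been deleted by the time the describe at helpers_test.go:277 executed. A sketch that tolerates this window, using --ignore-not-found (supported by kubectl get, though not by describe):

	# Fetch the pod by name; if it has already been deleted, this prints
	# nothing and exits 0 instead of failing with NotFound.
	kubectl --context addons-376302 -n my-volcano get po test-job-nginx-0 --ignore-not-found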
--- FAIL: TestAddons/serial/Volcano (199.76s)

                                                
                                    

Test pass (299/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.65
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.54
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 215.2
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15.41
34 TestAddons/parallel/Ingress 17.94
35 TestAddons/parallel/InspektorGadget 12.07
36 TestAddons/parallel/MetricsServer 5.79
38 TestAddons/parallel/CSI 47.81
39 TestAddons/parallel/Headlamp 13.29
40 TestAddons/parallel/CloudSpanner 6.57
41 TestAddons/parallel/LocalPath 52.04
42 TestAddons/parallel/NvidiaDevicePlugin 6.55
43 TestAddons/parallel/Yakd 10.85
44 TestAddons/StoppedEnableDisable 12.31
45 TestCertOptions 42.66
46 TestCertExpiration 231.79
48 TestForceSystemdFlag 32.19
49 TestForceSystemdEnv 35.94
50 TestDockerEnvContainerd 45.95
55 TestErrorSpam/setup 30.8
56 TestErrorSpam/start 0.77
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.72
59 TestErrorSpam/unpause 1.88
60 TestErrorSpam/stop 1.44
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 51.83
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.01
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
72 TestFunctional/serial/CacheCmd/cache/add_local 1.39
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 45.02
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.67
83 TestFunctional/serial/LogsFileCmd 1.67
84 TestFunctional/serial/InvalidService 4.98
86 TestFunctional/parallel/ConfigCmd 0.43
87 TestFunctional/parallel/DashboardCmd 10.13
88 TestFunctional/parallel/DryRun 0.51
89 TestFunctional/parallel/InternationalLanguage 0.2
90 TestFunctional/parallel/StatusCmd 1.01
94 TestFunctional/parallel/ServiceCmdConnect 8.57
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 24.19
98 TestFunctional/parallel/SSHCmd 0.53
99 TestFunctional/parallel/CpCmd 1.86
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.26
106 TestFunctional/parallel/NodeLabels 0.09
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
110 TestFunctional/parallel/License 0.28
111 TestFunctional/parallel/Version/short 0.07
112 TestFunctional/parallel/Version/components 1.33
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
118 TestFunctional/parallel/ImageCommands/Setup 0.79
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.62
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
124 TestFunctional/parallel/ServiceCmd/DeployApp 10.23
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
135 TestFunctional/parallel/ServiceCmd/List 0.32
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
138 TestFunctional/parallel/ServiceCmd/Format 0.36
139 TestFunctional/parallel/ServiceCmd/URL 0.35
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
147 TestFunctional/parallel/ProfileCmd/profile_list 0.42
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
149 TestFunctional/parallel/MountCmd/any-port 7.83
150 TestFunctional/parallel/MountCmd/specific-port 1.95
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 116.57
159 TestMultiControlPlane/serial/DeployApp 32.47
160 TestMultiControlPlane/serial/PingHostFromPods 1.65
161 TestMultiControlPlane/serial/AddWorkerNode 23.16
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
164 TestMultiControlPlane/serial/CopyFile 18.79
165 TestMultiControlPlane/serial/StopSecondaryNode 12.89
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
167 TestMultiControlPlane/serial/RestartSecondaryNode 19.17
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 131.68
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.59
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
172 TestMultiControlPlane/serial/StopCluster 36.05
173 TestMultiControlPlane/serial/RestartCluster 78.62
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
175 TestMultiControlPlane/serial/AddSecondaryNode 45.55
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
180 TestJSONOutput/start/Command 85.36
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.65
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.76
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 41.29
206 TestKicCustomNetwork/use_default_bridge_network 33.74
207 TestKicExistingNetwork 32.18
208 TestKicCustomSubnet 32.68
209 TestKicStaticIP 36.15
210 TestMainNoArgs 0.1
211 TestMinikubeProfile 64.35
214 TestMountStart/serial/StartWithMountFirst 6.21
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 5.92
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.22
221 TestMountStart/serial/RestartStopped 7.57
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 64.37
226 TestMultiNode/serial/DeployApp2Nodes 19.1
227 TestMultiNode/serial/PingHostFrom2Pods 1
228 TestMultiNode/serial/AddNode 16.71
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.68
231 TestMultiNode/serial/CopyFile 10.21
232 TestMultiNode/serial/StopNode 2.29
233 TestMultiNode/serial/StartAfterStop 9.68
234 TestMultiNode/serial/RestartKeepsNodes 98.65
235 TestMultiNode/serial/DeleteNode 5.48
236 TestMultiNode/serial/StopMultiNode 24.02
237 TestMultiNode/serial/RestartMultiNode 50.67
238 TestMultiNode/serial/ValidateNameConflict 34.17
243 TestPreload 120.8
245 TestScheduledStopUnix 106.93
248 TestInsufficientStorage 10.22
249 TestRunningBinaryUpgrade 82.61
251 TestKubernetesUpgrade 350.59
252 TestMissingContainerUpgrade 193.84
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 41.37
256 TestNoKubernetes/serial/StartWithStopK8s 8.83
257 TestNoKubernetes/serial/Start 9.63
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
259 TestNoKubernetes/serial/ProfileList 0.95
260 TestNoKubernetes/serial/Stop 1.23
261 TestNoKubernetes/serial/StartNoArgs 6.58
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
263 TestStoppedBinaryUpgrade/Setup 0.89
264 TestStoppedBinaryUpgrade/Upgrade 103.02
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
274 TestPause/serial/Start 99.55
282 TestNetworkPlugins/group/false 3.59
286 TestPause/serial/SecondStartNoReconfiguration 7.49
287 TestPause/serial/Pause 1.1
288 TestPause/serial/VerifyStatus 0.5
289 TestPause/serial/Unpause 0.91
290 TestPause/serial/PauseAgain 1.1
291 TestPause/serial/DeletePaused 3.09
292 TestPause/serial/VerifyDeletedResources 0.79
294 TestStartStop/group/old-k8s-version/serial/FirstStart 148.07
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.53
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
297 TestStartStop/group/old-k8s-version/serial/Stop 12.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/old-k8s-version/serial/SecondStart 154.52
301 TestStartStop/group/no-preload/serial/FirstStart 68.23
302 TestStartStop/group/no-preload/serial/DeployApp 9.37
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
304 TestStartStop/group/no-preload/serial/Stop 12.05
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 290.45
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.47
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
310 TestStartStop/group/old-k8s-version/serial/Pause 3.55
312 TestStartStop/group/embed-certs/serial/FirstStart 53.01
313 TestStartStop/group/embed-certs/serial/DeployApp 9.36
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
315 TestStartStop/group/embed-certs/serial/Stop 12.05
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/embed-certs/serial/SecondStart 266.56
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
321 TestStartStop/group/no-preload/serial/Pause 3.05
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.19
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.75
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
332 TestStartStop/group/embed-certs/serial/Pause 3.73
334 TestStartStop/group/newest-cni/serial/FirstStart 38.58
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
337 TestStartStop/group/newest-cni/serial/Stop 1.27
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 16.06
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/newest-cni/serial/Pause 3.03
344 TestNetworkPlugins/group/auto/Start 91.59
345 TestNetworkPlugins/group/auto/KubeletFlags 0.29
346 TestNetworkPlugins/group/auto/NetCatPod 9.3
347 TestNetworkPlugins/group/auto/DNS 0.2
348 TestNetworkPlugins/group/auto/Localhost 0.17
349 TestNetworkPlugins/group/auto/HairPin 0.16
350 TestNetworkPlugins/group/flannel/Start 48.65
351 TestNetworkPlugins/group/flannel/ControllerPod 6.01
352 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
353 TestNetworkPlugins/group/flannel/NetCatPod 9.27
354 TestNetworkPlugins/group/flannel/DNS 0.17
355 TestNetworkPlugins/group/flannel/Localhost 0.16
356 TestNetworkPlugins/group/flannel/HairPin 0.15
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.13
361 TestNetworkPlugins/group/calico/Start 73.92
362 TestNetworkPlugins/group/custom-flannel/Start 62.86
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.29
367 TestNetworkPlugins/group/calico/NetCatPod 9.3
368 TestNetworkPlugins/group/custom-flannel/DNS 0.29
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
371 TestNetworkPlugins/group/calico/DNS 0.25
372 TestNetworkPlugins/group/calico/Localhost 0.3
373 TestNetworkPlugins/group/calico/HairPin 0.2
374 TestNetworkPlugins/group/kindnet/Start 92.92
375 TestNetworkPlugins/group/bridge/Start 51.91
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
377 TestNetworkPlugins/group/bridge/NetCatPod 9.3
378 TestNetworkPlugins/group/bridge/DNS 0.19
379 TestNetworkPlugins/group/bridge/Localhost 0.15
380 TestNetworkPlugins/group/bridge/HairPin 0.16
381 TestNetworkPlugins/group/enable-default-cni/Start 75.84
382 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
383 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
384 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
385 TestNetworkPlugins/group/kindnet/DNS 0.23
386 TestNetworkPlugins/group/kindnet/Localhost 0.16
387 TestNetworkPlugins/group/kindnet/HairPin 0.19
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.26
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-607949 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-607949 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.652333407s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.65s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:23:56.251409  589083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0927 00:23:56.251497  589083 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-607949
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-607949: exit status 85 (68.843937ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-607949 | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC |          |
	|         | -p download-only-607949        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:23:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:23:46.644683  589088 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:23:46.644871  589088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:23:46.644883  589088 out.go:358] Setting ErrFile to fd 2...
	I0927 00:23:46.644890  589088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:23:46.645179  589088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	W0927 00:23:46.645352  589088 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-583677/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-583677/.minikube/config/config.json: no such file or directory
	I0927 00:23:46.645810  589088 out.go:352] Setting JSON to true
	I0927 00:23:46.646799  589088 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14761,"bootTime":1727381865,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 00:23:46.646886  589088 start.go:139] virtualization:  
	I0927 00:23:46.649945  589088 out.go:97] [download-only-607949] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0927 00:23:46.650168  589088 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:23:46.650225  589088 notify.go:220] Checking for updates...
	I0927 00:23:46.653121  589088 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:23:46.655112  589088 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:23:46.657311  589088 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:23:46.659211  589088 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 00:23:46.661105  589088 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:23:46.664724  589088 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:23:46.664985  589088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:23:46.694568  589088 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:23:46.694671  589088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:23:46.759237  589088 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:23:46.742091738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:23:46.759345  589088 docker.go:318] overlay module found
	I0927 00:23:46.761372  589088 out.go:97] Using the docker driver based on user configuration
	I0927 00:23:46.761410  589088 start.go:297] selected driver: docker
	I0927 00:23:46.761417  589088 start.go:901] validating driver "docker" against <nil>
	I0927 00:23:46.761520  589088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:23:46.821326  589088 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:23:46.810901083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:23:46.821622  589088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:23:46.821990  589088 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:23:46.822194  589088 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:23:46.824100  589088 out.go:169] Using Docker driver with root privileges
	I0927 00:23:46.825793  589088 cni.go:84] Creating CNI manager for ""
	I0927 00:23:46.825870  589088 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 00:23:46.825883  589088 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:23:46.825964  589088 start.go:340] cluster config:
	{Name:download-only-607949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-607949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:23:46.827843  589088 out.go:97] Starting "download-only-607949" primary control-plane node in "download-only-607949" cluster
	I0927 00:23:46.827878  589088 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 00:23:46.829735  589088 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:23:46.829780  589088 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 00:23:46.829877  589088 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:23:46.845845  589088 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:23:46.846542  589088 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:23:46.846653  589088 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:23:46.895640  589088 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0927 00:23:46.895680  589088 cache.go:56] Caching tarball of preloaded images
	I0927 00:23:46.896254  589088 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 00:23:46.898246  589088 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 00:23:46.898270  589088 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0927 00:23:46.985238  589088 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0927 00:23:51.401826  589088 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0927 00:23:51.402087  589088 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-607949 host does not exist
	  To start a cluster, run: "minikube start -p download-only-607949"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
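
Note: the "Last Start" log above shows the preload path end to end: minikube downloads a preloaded-images tarball whose URL carries a checksum=md5:... parameter (download.go:107), then saves and verifies that checksum locally (preload.go:247-254). A sketch for re-checking the cached tarball by hand, with the hash and path exactly as they appear in the log:

	# Verify the cached preload tarball against the md5 from the download URL
	cd /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball
	echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -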

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-607949
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-981151 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-981151 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.537973422s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.54s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:24:02.187272  589083 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0927 00:24:02.187310  589083 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-981151
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-981151: exit status 85 (70.169576ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-607949 | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC |                     |
	|         | -p download-only-607949        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC | 27 Sep 24 00:23 UTC |
	| delete  | -p download-only-607949        | download-only-607949 | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC | 27 Sep 24 00:23 UTC |
	| start   | -o=json --download-only        | download-only-981151 | jenkins | v1.34.0 | 27 Sep 24 00:23 UTC |                     |
	|         | -p download-only-981151        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:23:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:23:56.692409  589287 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:23:56.692632  589287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:23:56.692658  589287 out.go:358] Setting ErrFile to fd 2...
	I0927 00:23:56.692677  589287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:23:56.692946  589287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:23:56.693375  589287 out.go:352] Setting JSON to true
	I0927 00:23:56.694336  589287 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14772,"bootTime":1727381865,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 00:23:56.694432  589287 start.go:139] virtualization:  
	I0927 00:23:56.697143  589287 out.go:97] [download-only-981151] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:23:56.697330  589287 notify.go:220] Checking for updates...
	I0927 00:23:56.699354  589287 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:23:56.701173  589287 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:23:56.703228  589287 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:23:56.705041  589287 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 00:23:56.706688  589287 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:23:56.710113  589287 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:23:56.710443  589287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:23:56.738588  589287 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:23:56.738705  589287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:23:56.791910  589287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:23:56.780050787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:23:56.792018  589287 docker.go:318] overlay module found
	I0927 00:23:56.793966  589287 out.go:97] Using the docker driver based on user configuration
	I0927 00:23:56.793998  589287 start.go:297] selected driver: docker
	I0927 00:23:56.794005  589287 start.go:901] validating driver "docker" against <nil>
	I0927 00:23:56.794116  589287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:23:56.844276  589287 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:23:56.834756418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:23:56.844485  589287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:23:56.844757  589287 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:23:56.844923  589287 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:23:56.846959  589287 out.go:169] Using Docker driver with root privileges
	I0927 00:23:56.848777  589287 cni.go:84] Creating CNI manager for ""
	I0927 00:23:56.848849  589287 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 00:23:56.848862  589287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:23:56.848946  589287 start.go:340] cluster config:
	{Name:download-only-981151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-981151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:23:56.851107  589287 out.go:97] Starting "download-only-981151" primary control-plane node in "download-only-981151" cluster
	I0927 00:23:56.851126  589287 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 00:23:56.853076  589287 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:23:56.853109  589287 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:23:56.853151  589287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:23:56.868359  589287 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:23:56.868478  589287 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:23:56.868497  589287 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:23:56.868503  589287 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:23:56.868511  589287 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:23:56.918052  589287 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 00:23:56.918084  589287 cache.go:56] Caching tarball of preloaded images
	I0927 00:23:56.918241  589287 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:23:56.920473  589287 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 00:23:56.920491  589287 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0927 00:23:57.013739  589287 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 00:24:00.334935  589287 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0927 00:24:00.335156  589287 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19711-583677/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0927 00:24:01.265766  589287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0927 00:24:01.266185  589287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/download-only-981151/config.json ...
	I0927 00:24:01.266221  589287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/download-only-981151/config.json: {Name:mk2370f211626efac51f647674f585fa78ce7a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:24:01.266880  589287 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 00:24:01.267403  589287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19711-583677/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-981151 host does not exist
	  To start a cluster, run: "minikube start -p download-only-981151"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-981151
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0927 00:24:03.426991  589083 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-548949 --alsologtostderr --binary-mirror http://127.0.0.1:37853 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-548949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-548949
--- PASS: TestBinaryMirror (0.54s)
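
TestBinaryMirror exercises the --binary-mirror flag, which makes minikube fetch the kubectl/kubeadm/kubelet binaries from an alternate HTTP endpoint instead of dl.k8s.io. A minimal sketch of the same idea, assuming a local mirror directory that reproduces the upstream release layout (the port, directory, and profile name below are illustrative, not from this run):

	# serve the mirror, then point minikube's binary downloads at it
	python3 -m http.server 8080 --directory /srv/k8s-mirror &
	out/minikube-linux-arm64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:8080 \
	  --driver=docker --container-runtime=containerd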

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-376302
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-376302: exit status 85 (84.056157ms)

-- stdout --
	* Profile "addons-376302" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-376302"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-376302
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-376302: exit status 85 (79.558353ms)

-- stdout --
	* Profile "addons-376302" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-376302"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (215.2s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-376302 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-376302 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m35.198498786s)
--- PASS: TestAddons/Setup (215.20s)
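
The setup run enables thirteen addons in a single start invocation. Outside the test harness the same stack can be assembled incrementally; a sketch with an illustrative profile name:

	out/minikube-linux-arm64 start -p addons-demo --memory=4000 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 addons enable registry -p addons-demo
	out/minikube-linux-arm64 addons enable metrics-server -p addons-demo
	out/minikube-linux-arm64 addons list -p addons-demo    # confirm which addons are enabled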

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-376302 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-376302 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (15.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.884205ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pkjkw" [5f4663c9-a883-4664-8c1a-10bd6625888a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004809491s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nxrj2" [33cbf2e0-38da-4675-b8c1-d7be72de0161] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004190998s
addons_test.go:338: (dbg) Run:  kubectl --context addons-376302 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-376302 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-376302 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.364661586s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 ip
2024/09/27 00:31:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.41s)
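
The registry check works by resolving the addon's in-cluster Service DNS name from a throwaway pod; wget --spider only issues the request and reports whether the endpoint answered. A hand-run equivalent against any cluster with the registry addon enabled (the pod name is arbitrary):

	kubectl run registry-probe --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"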

TestAddons/parallel/Ingress (17.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-376302 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-376302 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-376302 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d8017de8-d49b-407f-bed0-e119febf0f35] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d8017de8-d49b-407f-bed0-e119febf0f35] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003788844s
I0927 00:32:45.379425  589083 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-376302 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 addons disable ingress-dns --alsologtostderr -v=1: (1.348898454s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 addons disable ingress --alsologtostderr -v=1: (7.967036084s)
--- PASS: TestAddons/parallel/Ingress (17.94s)
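
The ingress assertions reduce to two manual checks: a curl inside the node that routes through the NGINX controller by setting the Host header, and a DNS lookup answered by ingress-dns at the node IP. A sketch using the host names from the test data (the profile name is illustrative):

	out/minikube-linux-arm64 -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-demo ip)"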

TestAddons/parallel/InspektorGadget (12.07s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zmj5f" [45661758-86d8-449d-bd7c-64e643ed664a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003775325s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-376302
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-376302: (6.06808475s)
--- PASS: TestAddons/parallel/InspektorGadget (12.07s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.997361ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vtnnr" [bbb2738a-95e1-47e4-b01e-618faac704d5] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004031776s
addons_test.go:413: (dbg) Run:  kubectl --context addons-376302 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (47.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 00:31:28.817993  589083 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:31:28.823213  589083 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:31:28.823242  589083 kapi.go:107] duration metric: took 7.848712ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.858189ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-376302 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-376302 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [24dff81a-99be-4bc6-9031-1b83037f966e] Pending
helpers_test.go:344: "task-pv-pod" [24dff81a-99be-4bc6-9031-1b83037f966e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [24dff81a-99be-4bc6-9031-1b83037f966e] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003713245s
addons_test.go:528: (dbg) Run:  kubectl --context addons-376302 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376302 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376302 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-376302 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-376302 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-376302 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-376302 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8524af6c-33e0-4e42-91db-f97657914e36] Pending
helpers_test.go:344: "task-pv-pod-restore" [8524af6c-33e0-4e42-91db-f97657914e36] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8524af6c-33e0-4e42-91db-f97657914e36] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003679921s
addons_test.go:570: (dbg) Run:  kubectl --context addons-376302 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-376302 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-376302 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.788252678s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.81s)
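
The CSI block is a full snapshot/restore round trip: bind a PVC, mount it from a pod, snapshot it, hydrate a new PVC from that snapshot, and mount the restored claim from a second pod. Condensed to its kubectl skeleton (the manifests are the repo's own testdata files; the jsonpath probe mirrors the helper's wait loop):

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # claim against the hostpath CSI driver
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod that mounts the claim
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot of the claim
	kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC restored from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod that mounts the restored claim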

TestAddons/parallel/Headlamp (13.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-376302 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-376302 --alsologtostderr -v=1: (1.011694839s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-nkcbg" [625e85f2-c689-410f-b408-30944e1be60c] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-nkcbg" [625e85f2-c689-410f-b408-30944e1be60c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-nkcbg" [625e85f2-c689-410f-b408-30944e1be60c] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003588419s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.29s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-ml7ps" [404e96ba-f17b-443c-8cde-4bc6756a9f09] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002691532s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-376302
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (52.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-376302 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-376302 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6ea4a3f2-d6f9-4a3b-9c4e-cd4188f3d4e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6ea4a3f2-d6f9-4a3b-9c4e-cd4188f3d4e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6ea4a3f2-d6f9-4a3b-9c4e-cd4188f3d4e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003989417s
addons_test.go:938: (dbg) Run:  kubectl --context addons-376302 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 ssh "cat /opt/local-path-provisioner/pvc-0db0b7ec-effe-43fa-aa65-11e88339425d_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-376302 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-376302 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.722919214s)
--- PASS: TestAddons/parallel/LocalPath (52.04s)
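
The local-path check relies on the rancher provisioner backing each volume with a hostPath directory named <volume>_<namespace>_<claim> under /opt/local-path-provisioner on the node. To reproduce the verification by hand, look up the claim's volume name first (the UID in the path above is specific to this run; the profile name and <volume> placeholder below are illustrative):

	kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}'    # prints the pvc-<uid> volume name
	out/minikube-linux-arm64 -p addons-demo ssh "cat /opt/local-path-provisioner/<volume>_default_test-pvc/file1"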

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wjw6j" [473f336e-2969-4568-9fc5-c53cc41f9fb9] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003450836s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-376302
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (10.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xlrnc" [e2f0ad5f-a44b-4c1f-a78d-2bde84d395d3] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005721938s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-376302 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-376302 addons disable yakd --alsologtostderr -v=1: (5.840317648s)
--- PASS: TestAddons/parallel/Yakd (10.85s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-376302
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-376302: (12.044557737s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-376302
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-376302
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-376302
--- PASS: TestAddons/StoppedEnableDisable (12.31s)
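
StoppedEnableDisable confirms that addon toggles are accepted while the cluster is down; the commands only mutate the profile's stored config, which is reconciled on the next start. Sketch (profile name illustrative):

	out/minikube-linux-arm64 stop -p addons-demo
	out/minikube-linux-arm64 addons enable dashboard -p addons-demo     # recorded in the profile config
	out/minikube-linux-arm64 addons disable dashboard -p addons-demo    # applied when the cluster next starts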

TestCertOptions (42.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-181316 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0927 01:10:42.345211  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-181316 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.621198025s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-181316 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-181316 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-181316 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-181316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-181316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-181316: (2.210359742s)
--- PASS: TestCertOptions (42.66s)
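
The certificate assertions come down to inspecting the generated apiserver certificate for the requested SANs and confirming the non-default port in the kubeconfig. A manual equivalent, assuming the profile's context is active (profile name illustrative):

	out/minikube-linux-arm64 -p cert-demo ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"    # expect 192.168.15.15 and www.google.com among the SANs
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # expect port 8555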

TestCertExpiration (231.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-551155 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-551155 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.63318035s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-551155 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-551155 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.871893561s)
helpers_test.go:175: Cleaning up "cert-expiration-551155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-551155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-551155: (2.285670972s)
--- PASS: TestCertExpiration (231.79s)
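
TestCertExpiration first issues certificates with a three-minute lifetime, waits them out, then restarts with --cert-expiration=8760h so minikube regenerates them. Under a default MINIKUBE_HOME the expiry can be read straight off the client certificate (profile name illustrative):

	openssl x509 -noout -enddate -in ~/.minikube/profiles/cert-demo/client.crt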

TestForceSystemdFlag (32.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-914053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-914053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.846029755s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-914053 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-914053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-914053
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-914053: (2.008361809s)
--- PASS: TestForceSystemdFlag (32.19s)
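
--force-systemd switches the container runtime to the systemd cgroup driver, and the test verifies it by reading containerd's rendered config over SSH. The manual check is a one-liner (profile name illustrative); with the flag set, the runc options should contain SystemdCgroup = true:

	out/minikube-linux-arm64 -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup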

TestForceSystemdEnv (35.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-649776 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-649776 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.316496127s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-649776 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-649776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-649776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-649776: (2.194796428s)
--- PASS: TestForceSystemdEnv (35.94s)

TestDockerEnvContainerd (45.95s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-496765 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-496765 --driver=docker  --container-runtime=containerd: (30.345006355s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-496765"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-496765": (1.002857163s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Rc3RrmWbrDfq/agent.609392" SSH_AGENT_PID="609393" DOCKER_HOST=ssh://docker@127.0.0.1:33514 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Rc3RrmWbrDfq/agent.609392" SSH_AGENT_PID="609393" DOCKER_HOST=ssh://docker@127.0.0.1:33514 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Rc3RrmWbrDfq/agent.609392" SSH_AGENT_PID="609393" DOCKER_HOST=ssh://docker@127.0.0.1:33514 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.214671678s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Rc3RrmWbrDfq/agent.609392" SSH_AGENT_PID="609393" DOCKER_HOST=ssh://docker@127.0.0.1:33514 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-496765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-496765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-496765: (1.953771069s)
--- PASS: TestDockerEnvContainerd (45.95s)
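
docker-env --ssh-host --ssh-add emits shell exports that point a host docker CLI at the daemon inside the minikube node over SSH (DOCKER_HOST=ssh://docker@127.0.0.1:<port>, with the node key loaded into an ssh-agent). Interactive use looks like this sketch (profile name illustrative):

	eval "$(out/minikube-linux-arm64 -p dockerenv-demo docker-env --ssh-host --ssh-add)"
	docker build -t local/demo:latest .    # builds against the daemon inside the node
	docker image ls                        # the new image is visible to the cluster's runtime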

TestErrorSpam/setup (30.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-965440 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-965440 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-965440 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-965440 --driver=docker  --container-runtime=containerd: (30.795023098s)
--- PASS: TestErrorSpam/setup (30.80s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 stop: (1.253390065s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-965440 --log_dir /tmp/nospam-965440 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-583677/.minikube/files/etc/test/nested/copy/589083/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-775062 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.824830789s)
--- PASS: TestFunctional/serial/StartWithProxy (51.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
I0927 00:35:34.472895  589083 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-775062 --alsologtostderr -v=8: (6.006495534s)
functional_test.go:663: soft start took 6.008081767s for "functional-775062" cluster.
I0927 00:35:40.479719  589083 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-775062 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:3.1: (1.609048371s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:3.3: (1.409103708s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 cache add registry.k8s.io/pause:latest: (1.107812483s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-775062 /tmp/TestFunctionalserialCacheCmdcacheadd_local3058013043/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache add minikube-local-cache-test:functional-775062
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache delete minikube-local-cache-test:functional-775062
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-775062
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.73641ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 cache reload: (1.124656043s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
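The sequence this test drives (remove the image inside the node, confirm `crictl inspecti` fails, run `cache reload`, confirm it succeeds) can be reproduced outside the harness. A minimal Go sketch, assuming a `minikube` binary on PATH; the profile name, image tag, and commands are taken from the log above:

// cache_reload sketch: delete a cached image inside the node, verify it is
// gone, reload the cache, and verify it is back. Profile name and image tag
// come from the log above; the plain "minikube" binary is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

// mk runs `minikube -p functional-775062 <args...>` and returns its error.
func mk(args ...string) error {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-775062"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	mk("ssh", "sudo", "crictl", "rmi", img)
	if mk("ssh", "sudo", "crictl", "inspecti", img) == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	mk("cache", "reload")
	if err := mk("ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}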

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 kubectl -- --context functional-775062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-775062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-775062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.022990961s)
functional_test.go:761: restart took 45.023096569s for "functional-775062" cluster.
I0927 00:36:33.965835  589083 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (45.02s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-775062 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 logs: (1.66880857s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

TestFunctional/serial/LogsFileCmd (1.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 logs --file /tmp/TestFunctionalserialLogsFileCmd631262276/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 logs --file /tmp/TestFunctionalserialLogsFileCmd631262276/001/logs.txt: (1.668916531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.67s)

TestFunctional/serial/InvalidService (4.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-775062 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-775062
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-775062: exit status 115 (602.990201ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32029 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-775062 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-775062 delete -f testdata/invalidsvc.yaml: (1.122502111s)
--- PASS: TestFunctional/serial/InvalidService (4.98s)
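The interesting assertion here is the exit code: `minikube service` returns 115 (SVC_UNREACHABLE) when no running pod backs the service. A hedged sketch of checking that code programmatically, reusing the profile and service name from the log above:

// invalid_service sketch: assert that `minikube service` exits with code 115
// when the service has no running pods, as observed in the log above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-775062").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code 115")
	} else {
		fmt.Println("unexpected result:", err)
	}
}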

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 config get cpus: exit status 14 (73.028877ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 config get cpus: exit status 14 (74.261167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
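As the two non-zero exits show, `config get` on an unset key fails with exit status 14, while a set key returns its value with status 0. A minimal sketch of the same round-trip, assuming `minikube` on PATH and the profile name from the log:

// config round-trip sketch: unset/get/set/get/unset, checking the exit code
// 14 that marks a missing key (per the log above).
package main

import (
	"fmt"
	"os/exec"
)

// config runs `minikube -p functional-775062 config <args...>` and returns
// the command output together with its exit code.
func config(args ...string) (string, int) {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-775062", "config"}, args...)...)
	out, _ := cmd.CombinedOutput()
	return string(out), cmd.ProcessState.ExitCode()
}

func main() {
	config("unset", "cpus")
	if _, code := config("get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 for an unset key, got", code)
	}
	config("set", "cpus", "2")
	if val, code := config("get", "cpus"); code == 0 {
		fmt.Print("cpus = ", val)
	}
	config("unset", "cpus")
}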

TestFunctional/parallel/DashboardCmd (10.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-775062 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-775062 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 626196: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.13s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-775062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (294.944522ms)

-- stdout --
	* [functional-775062] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 00:37:23.195746  625571 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:37:23.195870  625571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:23.195875  625571 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:23.195881  625571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:23.196123  625571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:37:23.199561  625571 out.go:352] Setting JSON to false
	I0927 00:37:23.200619  625571 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15578,"bootTime":1727381865,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 00:37:23.200701  625571 start.go:139] virtualization:  
	I0927 00:37:23.203217  625571 out.go:177] * [functional-775062] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:37:23.205410  625571 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:37:23.205568  625571 notify.go:220] Checking for updates...
	I0927 00:37:23.209098  625571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:37:23.211325  625571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:37:23.213233  625571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 00:37:23.216514  625571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:37:23.220029  625571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:37:23.222816  625571 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:37:23.223540  625571 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:37:23.263588  625571 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:37:23.263725  625571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:37:23.360542  625571 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:37:23.350266179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:37:23.360655  625571 docker.go:318] overlay module found
	I0927 00:37:23.362758  625571 out.go:177] * Using the docker driver based on existing profile
	I0927 00:37:23.365357  625571 start.go:297] selected driver: docker
	I0927 00:37:23.365380  625571 start.go:901] validating driver "docker" against &{Name:functional-775062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-775062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:37:23.365502  625571 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:37:23.367785  625571 out.go:201] 
	W0927 00:37:23.369460  625571 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:37:23.371243  625571 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.51s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-775062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-775062 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (204.4399ms)

-- stdout --
	* [functional-775062] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 00:37:23.645545  625749 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:37:23.645743  625749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:23.645770  625749 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:23.645790  625749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:23.646761  625749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:37:23.647225  625749 out.go:352] Setting JSON to false
	I0927 00:37:23.648265  625749 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15578,"bootTime":1727381865,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 00:37:23.648374  625749 start.go:139] virtualization:  
	I0927 00:37:23.650552  625749 out.go:177] * [functional-775062] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0927 00:37:23.653154  625749 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:37:23.653237  625749 notify.go:220] Checking for updates...
	I0927 00:37:23.656983  625749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:37:23.658723  625749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 00:37:23.660354  625749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 00:37:23.662369  625749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:37:23.664364  625749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:37:23.666845  625749 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:37:23.667554  625749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:37:23.696154  625749 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:37:23.696266  625749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:37:23.780104  625749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:37:23.770027465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:37:23.780213  625749 docker.go:318] overlay module found
	I0927 00:37:23.782606  625749 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 00:37:23.784534  625749 start.go:297] selected driver: docker
	I0927 00:37:23.784552  625749 start.go:901] validating driver "docker" against &{Name:functional-775062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-775062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:37:23.784653  625749 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:37:23.786862  625749 out.go:201] 
	W0927 00:37:23.788950  625749 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:37:23.793157  625749 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
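The French output above is the same failing dry-run executed under a French locale. A sketch, assuming minikube follows the usual LANGUAGE/LC_ALL/LANG lookup (the exact variable this test sets is not visible in the log, so LC_ALL=fr is an assumption):

// locale sketch: rerun the failing dry-run under a French locale to get the
// translated RSRC_INSUFFICIENT_REQ_MEMORY message shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-775062", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: standard locale lookup
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	fmt.Println("exit:", err) // expect exit status 23, message in French
}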

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (8.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-775062 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-775062 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-t4kcw" [4a38eb1b-2396-4f95-b542-70f8a490de19] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-t4kcw" [4a38eb1b-2396-4f95-b542-70f8a490de19] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003559202s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32567
functional_test.go:1675: http://192.168.49.2:32567: success! body:

Hostname: hello-node-connect-65d86f57f4-t4kcw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32567
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.57s)
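The test resolves a NodePort URL and issues a plain HTTP GET against the echo server; the body above is the server reflecting the request back. A sketch of the same two steps, with the profile and service names carried over from the log:

// service URL sketch: resolve the NodePort URL for hello-node-connect and
// issue the same GET the test performs, printing the echoed request details.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-775062",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:32567
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}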

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (24.19s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [819b1781-b44f-4d0d-90a4-fe00a998a30c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003831799s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-775062 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-775062 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-775062 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-775062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85cdfae2-0621-4ee8-a7ba-d4352a7d1f9f] Pending
helpers_test.go:344: "sp-pod" [85cdfae2-0621-4ee8-a7ba-d4352a7d1f9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85cdfae2-0621-4ee8-a7ba-d4352a7d1f9f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004098013s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-775062 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-775062 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-775062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7bf944af-b644-44ea-9a01-3e340eabd34d] Pending
helpers_test.go:344: "sp-pod" [7bf944af-b644-44ea-9a01-3e340eabd34d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00370238s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-775062 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.19s)
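The persistence check reduces to: write a file through one pod, delete that pod, recreate it from the same manifest against the same claim, and read the file back. A sketch driving kubectl the way the test does; the manifest path is the one named in the log and is assumed to be available locally, and pod-readiness waits are elided:

// pvc persistence sketch: data written under /tmp/mount must survive pod
// deletion because it lives on the PersistentVolumeClaim, not in the pod.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-775062"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for sp-pod to reach Running before continuing)
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait again)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect: foo
}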

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.86s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh -n functional-775062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cp functional-775062:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3679182619/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh -n functional-775062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh -n functional-775062 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/589083/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /etc/test/nested/copy/589083/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/589083.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /etc/ssl/certs/589083.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/589083.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /usr/share/ca-certificates/589083.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5890832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /etc/ssl/certs/5890832.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5890832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /usr/share/ca-certificates/5890832.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-775062 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "sudo systemctl is-active docker": exit status 1 (315.982823ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "sudo systemctl is-active crio": exit status 1 (327.645556ms)

-- stdout --
	inactive

                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
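`systemctl is-active` prints the unit state and exits 0 only when the unit is active, so the status-3 exits above confirm that docker and crio are disabled while containerd is the selected runtime. A small sketch of the same probe, reusing the logged ssh command:

// runtime probe sketch: confirm docker and crio are inactive inside the node.
// Exit 0 from `systemctl is-active` means active; any non-zero exit (3 in the
// log above) means the unit is not running.
package main

import (
	"fmt"
	"os/exec"
)

func isActive(unit string) bool {
	return exec.Command("minikube", "-p", "functional-775062",
		"ssh", "sudo systemctl is-active "+unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Printf("%s active: %v\n", unit, isActive(unit))
	}
}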

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 version -o=json --components: (1.328228781s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-775062 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-775062
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-775062
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-775062 image ls --format short --alsologtostderr:
I0927 00:37:26.639261  626334 out.go:345] Setting OutFile to fd 1 ...
I0927 00:37:26.639430  626334 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:26.639463  626334 out.go:358] Setting ErrFile to fd 2...
I0927 00:37:26.639483  626334 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:26.639761  626334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
I0927 00:37:26.640501  626334 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:26.640698  626334 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:26.641282  626334 cli_runner.go:164] Run: docker container inspect functional-775062 --format={{.State.Status}}
I0927 00:37:26.659666  626334 ssh_runner.go:195] Run: systemctl --version
I0927 00:37:26.659722  626334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-775062
I0927 00:37:26.679111  626334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/functional-775062/id_rsa Username:docker}
I0927 00:37:26.770822  626334 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-775062 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| localhost/my-image                          | functional-775062  | sha256:531a4f | 831kB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| docker.io/kicbase/echo-server               | functional-775062  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-775062  | sha256:3248c6 | 991B   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-775062 image ls --format table --alsologtostderr:
I0927 00:37:31.046326  626840 out.go:345] Setting OutFile to fd 1 ...
I0927 00:37:31.046564  626840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:31.046592  626840 out.go:358] Setting ErrFile to fd 2...
I0927 00:37:31.046613  626840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:31.046898  626840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
I0927 00:37:31.047616  626840 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:31.047794  626840 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:31.048527  626840 cli_runner.go:164] Run: docker container inspect functional-775062 --format={{.State.Status}}
I0927 00:37:31.066735  626840 ssh_runner.go:195] Run: systemctl --version
I0927 00:37:31.066788  626840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-775062
I0927 00:37:31.086871  626840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/functional-775062/id_rsa Username:docker}
I0927 00:37:31.187060  626840 ssh_runner.go:195] Run: sudo crictl images --output json
2024/09/27 00:37:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-775062 image ls --format json --alsologtostderr:
[{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-775062"],"size":"2173567"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:531a4fc64cc351ab2f7a184d5b15bbd843c0231f538207f3b6626ae4281f6a74","repoDigests":[],"repoTags":["localhost/my-image:functional-775062"],"size":"830617"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:3248c65f5470b2ab7bf08c56bbe410aa7429b8e22237b3ce5c377ab532cd5b6d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-775062"],"size":"991"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-775062 image ls --format json --alsologtostderr:
I0927 00:37:30.783778  626807 out.go:345] Setting OutFile to fd 1 ...
I0927 00:37:30.783992  626807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:30.783997  626807 out.go:358] Setting ErrFile to fd 2...
I0927 00:37:30.784002  626807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:30.784258  626807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
I0927 00:37:30.784974  626807 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:30.785126  626807 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:30.785670  626807 cli_runner.go:164] Run: docker container inspect functional-775062 --format={{.State.Status}}
I0927 00:37:30.804489  626807 ssh_runner.go:195] Run: systemctl --version
I0927 00:37:30.804542  626807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-775062
I0927 00:37:30.826190  626807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/functional-775062/id_rsa Username:docker}
I0927 00:37:30.915304  626807 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
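
The stdout above is a single flat JSON array of image records. As a reference for consuming it, here is a minimal Go sketch (not part of the test suite): the struct fields simply mirror the keys visible in the output (id, repoDigests, repoTags, and size as a string of bytes), and the binary path and profile name are copied from this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the keys visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-775062",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%v (%s bytes)\n", img.RepoTags, img.Size)
	}
}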

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-775062 image ls --format yaml --alsologtostderr:
- id: sha256:3248c65f5470b2ab7bf08c56bbe410aa7429b8e22237b3ce5c377ab532cd5b6d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-775062
size: "991"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-775062
size: "2173567"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-775062 image ls --format yaml --alsologtostderr:
I0927 00:37:26.872793  626366 out.go:345] Setting OutFile to fd 1 ...
I0927 00:37:26.872987  626366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:26.873017  626366 out.go:358] Setting ErrFile to fd 2...
I0927 00:37:26.873038  626366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:26.873327  626366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
I0927 00:37:26.874038  626366 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:26.874222  626366 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:26.874895  626366 cli_runner.go:164] Run: docker container inspect functional-775062 --format={{.State.Status}}
I0927 00:37:26.891362  626366 ssh_runner.go:195] Run: systemctl --version
I0927 00:37:26.891413  626366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-775062
I0927 00:37:26.908957  626366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/functional-775062/id_rsa Username:docker}
I0927 00:37:26.998880  626366 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh pgrep buildkitd: exit status 1 (259.058823ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image build -t localhost/my-image:functional-775062 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 image build -t localhost/my-image:functional-775062 testdata/build --alsologtostderr: (3.123340504s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-775062 image build -t localhost/my-image:functional-775062 testdata/build --alsologtostderr:
I0927 00:37:27.360254  626455 out.go:345] Setting OutFile to fd 1 ...
I0927 00:37:27.360905  626455 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:27.360918  626455 out.go:358] Setting ErrFile to fd 2...
I0927 00:37:27.360924  626455 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:37:27.361159  626455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
I0927 00:37:27.361787  626455 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:27.363091  626455 config.go:182] Loaded profile config "functional-775062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 00:37:27.363766  626455 cli_runner.go:164] Run: docker container inspect functional-775062 --format={{.State.Status}}
I0927 00:37:27.381457  626455 ssh_runner.go:195] Run: systemctl --version
I0927 00:37:27.381510  626455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-775062
I0927 00:37:27.402654  626455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/functional-775062/id_rsa Username:docker}
I0927 00:37:27.499902  626455 build_images.go:161] Building image from path: /tmp/build.3457786054.tar
I0927 00:37:27.499981  626455 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:37:27.511967  626455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3457786054.tar
I0927 00:37:27.522970  626455 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3457786054.tar: stat -c "%s %y" /var/lib/minikube/build/build.3457786054.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3457786054.tar': No such file or directory
I0927 00:37:27.523000  626455 ssh_runner.go:362] scp /tmp/build.3457786054.tar --> /var/lib/minikube/build/build.3457786054.tar (3072 bytes)
I0927 00:37:27.553091  626455 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3457786054
I0927 00:37:27.562587  626455 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3457786054 -xf /var/lib/minikube/build/build.3457786054.tar
I0927 00:37:27.573293  626455 containerd.go:394] Building image: /var/lib/minikube/build/build.3457786054
I0927 00:37:27.573372  626455 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3457786054 --local dockerfile=/var/lib/minikube/build/build.3457786054 --output type=image,name=localhost/my-image:functional-775062
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 DONE 0.5s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a213e11b3b5390fc55a12420dd04cc3dd32538fe74235805631003f68bef06e2
#8 exporting manifest sha256:a213e11b3b5390fc55a12420dd04cc3dd32538fe74235805631003f68bef06e2 0.0s done
#8 exporting config sha256:531a4fc64cc351ab2f7a184d5b15bbd843c0231f538207f3b6626ae4281f6a74 0.0s done
#8 naming to localhost/my-image:functional-775062 done
#8 DONE 0.2s
I0927 00:37:30.411722  626455 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3457786054 --local dockerfile=/var/lib/minikube/build/build.3457786054 --output type=image,name=localhost/my-image:functional-775062: (2.838319532s)
I0927 00:37:30.411791  626455 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3457786054
I0927 00:37:30.425432  626455 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3457786054.tar
I0927 00:37:30.435084  626455 build_images.go:217] Built localhost/my-image:functional-775062 from /tmp/build.3457786054.tar
I0927 00:37:30.435112  626455 build_images.go:133] succeeded building to: functional-775062
I0927 00:37:30.435118  626455 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
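
The numbered stages above (#5 FROM gcr.io/k8s-minikube/busybox:latest, #6 RUN true, #7 ADD content.txt /) imply a three-instruction Dockerfile. Below is a hedged Go sketch that writes an equivalent build context and replays the same image build invocation; the Dockerfile text and the content.txt payload are reconstructions consistent with the log, not the actual testdata/build contents.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from build stages #5-#7 above; the real testdata/build
	// files may differ.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"

	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	// Placeholder payload; the original file's contents are not shown in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("content\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Same invocation as functional_test.go:315, pointed at the reconstructed context.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-775062",
		"image", "build", "-t", "localhost/my-image:functional-775062", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}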

TestFunctional/parallel/ImageCommands/Setup (0.79s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-775062
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr: (1.351463108s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr: (1.113004043s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-775062 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-775062 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-zszml" [5014f181-df07-4698-8a8c-46df5c2f869c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-zszml" [5014f181-df07-4698-8a8c-46df5c2f869c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004299199s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.23s)
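
The two kubectl steps above (functional_test.go:1437 and :1445) are easy to replay outside the suite. A minimal sketch, assuming kubectl is on PATH and the functional-775062 context exists:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the deployment-then-expose sequence logged above.
	steps := [][]string{
		{"kubectl", "--context", "functional-775062", "create", "deployment",
			"hello-node", "--image=registry.k8s.io/echoserver-arm:1.8"},
		{"kubectl", "--context", "functional-775062", "expose", "deployment",
			"hello-node", "--type=NodePort", "--port=8080"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
}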

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-775062
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-775062 image load --daemon kicbase/echo-server:functional-775062 --alsologtostderr: (1.047653966s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image save kicbase/echo-server:functional-775062 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image rm kicbase/echo-server:functional-775062 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-775062
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 image save --daemon kicbase/echo-server:functional-775062 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-775062
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 622679: os: process already finished
helpers_test.go:508: unable to kill pid 622556: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-775062 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b49bc734-4ec0-44de-8018-620281c9456b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b49bc734-4ec0-44de-8018-620281c9456b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004362436s
I0927 00:37:01.623927  589083 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/ServiceCmd/List (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service list -o json
functional_test.go:1494: Took "334.258768ms" to run "out/minikube-linux-arm64 -p functional-775062 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30585
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30585
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
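
With the NodePort endpoint in hand (http://192.168.49.2:30585 in this run), verification is one HTTP GET. A sketch under the assumption that the cluster from this run is still up; the URL changes from run to run.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Endpoint discovered by the service --url command above; environment-specific.
	resp, err := http.Get("http://192.168.49.2:30585")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status=%s body=%d bytes\n", resp.Status, len(body))
}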

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-775062 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.225.239 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-775062 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "361.21877ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.789462ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "336.530676ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "47.97709ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (7.83s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdany-port779953216/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397432760754372" to /tmp/TestFunctionalparallelMountCmdany-port779953216/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397432760754372" to /tmp/TestFunctionalparallelMountCmdany-port779953216/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397432760754372" to /tmp/TestFunctionalparallelMountCmdany-port779953216/001/test-1727397432760754372
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.216373ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 00:37:13.080953  589083 retry.go:31] will retry after 573.91356ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:37 test-1727397432760754372
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh cat /mount-9p/test-1727397432760754372
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-775062 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [364fc9db-c1df-4ad7-9532-3d4226d2cbff] Pending
helpers_test.go:344: "busybox-mount" [364fc9db-c1df-4ad7-9532-3d4226d2cbff] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [364fc9db-c1df-4ad7-9532-3d4226d2cbff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [364fc9db-c1df-4ad7-9532-3d4226d2cbff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003709002s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-775062 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdany-port779953216/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)
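
The retry.go line above illustrates the pattern this test depends on: the 9p mount appears asynchronously, so the first findmnt probe may fail and is retried after a short backoff. A standalone sketch of the same probe loop; the binary path, profile, and mount point come from this run, while the five-attempt cap and the 600ms delay are assumptions chosen to match the logged backoff.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Same probe the test issues over `minikube ssh`; the mount may not be
	// visible on the first attempt, hence the retry loop.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-775062",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		log.Printf("attempt %d: %v; retrying", attempt, err)
		time.Sleep(600 * time.Millisecond) // comparable to the retry.go backoff above
	}
	log.Fatal("mount never became visible at /mount-9p")
}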

TestFunctional/parallel/MountCmd/specific-port (1.95s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdspecific-port539516295/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.107839ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 00:37:20.925167  589083 retry.go:31] will retry after 616.469221ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdspecific-port539516295/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "sudo umount -f /mount-9p": exit status 1 (251.92832ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-775062 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdspecific-port539516295/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T" /mount1: exit status 1 (629.192351ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 00:37:23.178098  589083 retry.go:31] will retry after 561.579908ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-775062 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-775062 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-775062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2098862910/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-775062
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-775062
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-775062
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (116.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-239294 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 00:37:39.281831  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.288186  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.299602  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.320952  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.362301  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.443726  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.605162  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:39.926803  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.568817  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:41.850305  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:44.412410  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:49.534658  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:59.776274  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:20.257623  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:39:01.219025  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-239294 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.73216014s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.57s)
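
The cluster under test is a multi-control-plane (HA) cluster started with --ha. A minimal sketch of the same start and health gate, assuming the docker driver is available (profile name taken from this run):

  minikube start -p ha-239294 --ha --wait=true --memory=2200 \
    --driver=docker --container-runtime=containerd
  minikube -p ha-239294 status   # exits non-zero if any node is unhealthy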

                                                
                                    
TestMultiControlPlane/serial/DeployApp (32.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-239294 -- rollout status deployment/busybox: (29.521825559s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-9tmct -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-lnl99 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-xxr94 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-9tmct -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-lnl99 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-xxr94 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-9tmct -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-lnl99 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-xxr94 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.47s)
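
The deploy check rolls out a multi-replica busybox and resolves three DNS names from every replica. A sketch of the same loop, assuming a local copy of the test manifest (ha-pod-dns-test.yaml is a hypothetical path here) and a namespace that contains only the busybox pods, as in this run:

  kubectl apply -f ha-pod-dns-test.yaml
  kubectl rollout status deployment/busybox
  for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
      kubectl exec "$pod" -- nslookup "$name"
    done
  done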

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-9tmct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-9tmct -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-lnl99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-lnl99 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-xxr94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-239294 -- exec busybox-7dff88458-xxr94 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
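
The pipeline above pulls the resolved address for host.minikube.internal out of busybox's nslookup output (fifth line, third field, so it is layout-sensitive) and then pings it; on the default kic network that address is the docker bridge gateway, 192.168.49.1. Equivalent by hand, against one of this run's pods (names change per run):

  POD=busybox-7dff88458-9tmct
  HOST_IP=$(kubectl exec "$POD" -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec "$POD" -- ping -c 1 "$HOST_IP"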

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-239294 -v=7 --alsologtostderr
E0927 00:40:23.140251  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-239294 -v=7 --alsologtostderr: (22.115457209s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr: (1.0448202s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.16s)
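
node add joins a fresh node to the running cluster: with no flags it comes in as a worker (m04 here), while --control-plane, used by AddSecondaryNode further down, joins another control-plane member. Sketch against this profile:

  minikube node add -p ha-239294                   # new worker node
  minikube node add -p ha-239294 --control-plane   # new control-plane node
  minikube -p ha-239294 status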

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-239294 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.012553876s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)
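
The HAppy/Degraded subtests read the profile's Status field out of this JSON. A sketch of the same inspection, assuming jq is installed and that the .valid/.Name/.Status field names match what this minikube version emits:

  minikube profile list --output json \
    | jq -r '.valid[] | .Name + ": " + .Status'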

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp testdata/cp-test.txt ha-239294:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1865873637/001/cp-test_ha-239294.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294:/home/docker/cp-test.txt ha-239294-m02:/home/docker/cp-test_ha-239294_ha-239294-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test_ha-239294_ha-239294-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294:/home/docker/cp-test.txt ha-239294-m03:/home/docker/cp-test_ha-239294_ha-239294-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test_ha-239294_ha-239294-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294:/home/docker/cp-test.txt ha-239294-m04:/home/docker/cp-test_ha-239294_ha-239294-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test_ha-239294_ha-239294-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp testdata/cp-test.txt ha-239294-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1865873637/001/cp-test_ha-239294-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m02:/home/docker/cp-test.txt ha-239294:/home/docker/cp-test_ha-239294-m02_ha-239294.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test_ha-239294-m02_ha-239294.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m02:/home/docker/cp-test.txt ha-239294-m03:/home/docker/cp-test_ha-239294-m02_ha-239294-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test_ha-239294-m02_ha-239294-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m02:/home/docker/cp-test.txt ha-239294-m04:/home/docker/cp-test_ha-239294-m02_ha-239294-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test_ha-239294-m02_ha-239294-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp testdata/cp-test.txt ha-239294-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1865873637/001/cp-test_ha-239294-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m03:/home/docker/cp-test.txt ha-239294:/home/docker/cp-test_ha-239294-m03_ha-239294.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test_ha-239294-m03_ha-239294.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m03:/home/docker/cp-test.txt ha-239294-m02:/home/docker/cp-test_ha-239294-m03_ha-239294-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test_ha-239294-m03_ha-239294-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m03:/home/docker/cp-test.txt ha-239294-m04:/home/docker/cp-test_ha-239294-m03_ha-239294-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test_ha-239294-m03_ha-239294-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp testdata/cp-test.txt ha-239294-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1865873637/001/cp-test_ha-239294-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m04:/home/docker/cp-test.txt ha-239294:/home/docker/cp-test_ha-239294-m04_ha-239294.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test_ha-239294-m04_ha-239294.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m04:/home/docker/cp-test.txt ha-239294-m02:/home/docker/cp-test_ha-239294-m04_ha-239294-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test_ha-239294-m04_ha-239294-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 cp ha-239294-m04:/home/docker/cp-test.txt ha-239294-m03:/home/docker/cp-test_ha-239294-m04_ha-239294-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 ssh -n ha-239294-m03 "sudo cat /home/docker/cp-test_ha-239294-m04_ha-239294-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.79s)
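
The matrix above copies one test file host-to-node, node-to-host, and node-to-node for every node pair, reading it back over ssh each time. One round trip, with /tmp/cp-test.txt standing in for the suite's testdata file:

  echo cp-test > /tmp/cp-test.txt
  minikube -p ha-239294 cp /tmp/cp-test.txt ha-239294:/home/docker/cp-test.txt
  minikube -p ha-239294 ssh -n ha-239294 "sudo cat /home/docker/cp-test.txt"
  # node-to-node copy, then verify on the target node
  minikube -p ha-239294 cp ha-239294:/home/docker/cp-test.txt \
    ha-239294-m02:/home/docker/cp-test_copy.txt
  minikube -p ha-239294 ssh -n ha-239294-m02 "sudo cat /home/docker/cp-test_copy.txt"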

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-239294 node stop m02 -v=7 --alsologtostderr: (12.132497936s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr: exit status 7 (755.037885ms)

                                                
                                                
-- stdout --
	ha-239294
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-239294-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-239294-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-239294-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:41:02.644233  642893 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:41:02.644348  642893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:41:02.644353  642893 out.go:358] Setting ErrFile to fd 2...
	I0927 00:41:02.644358  642893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:41:02.644587  642893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:41:02.644815  642893 out.go:352] Setting JSON to false
	I0927 00:41:02.644879  642893 mustload.go:65] Loading cluster: ha-239294
	I0927 00:41:02.644916  642893 notify.go:220] Checking for updates...
	I0927 00:41:02.645430  642893 config.go:182] Loaded profile config "ha-239294": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:41:02.645448  642893 status.go:174] checking status of ha-239294 ...
	I0927 00:41:02.646168  642893 cli_runner.go:164] Run: docker container inspect ha-239294 --format={{.State.Status}}
	I0927 00:41:02.673453  642893 status.go:364] ha-239294 host status = "Running" (err=<nil>)
	I0927 00:41:02.673514  642893 host.go:66] Checking if "ha-239294" exists ...
	I0927 00:41:02.674111  642893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-239294
	I0927 00:41:02.703009  642893 host.go:66] Checking if "ha-239294" exists ...
	I0927 00:41:02.703330  642893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:41:02.703379  642893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-239294
	I0927 00:41:02.721346  642893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/ha-239294/id_rsa Username:docker}
	I0927 00:41:02.812029  642893 ssh_runner.go:195] Run: systemctl --version
	I0927 00:41:02.816547  642893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:41:02.828926  642893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:41:02.897622  642893 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-27 00:41:02.886227027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:41:02.898258  642893 kubeconfig.go:125] found "ha-239294" server: "https://192.168.49.254:8443"
	I0927 00:41:02.898283  642893 api_server.go:166] Checking apiserver status ...
	I0927 00:41:02.898324  642893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:41:02.910616  642893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	I0927 00:41:02.920387  642893 api_server.go:182] apiserver freezer: "5:freezer:/docker/eea371bf3cdc4dc78152280dd6f42f9c466dbdcc83a01934d2cb143c18b80417/kubepods/burstable/poda283fcde3466d36f7095e74bc6666fe4/38047496394377c35d211aa39a0b3d4a1d3516b29a1cdb41c57aa8a08a40030f"
	I0927 00:41:02.920460  642893 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eea371bf3cdc4dc78152280dd6f42f9c466dbdcc83a01934d2cb143c18b80417/kubepods/burstable/poda283fcde3466d36f7095e74bc6666fe4/38047496394377c35d211aa39a0b3d4a1d3516b29a1cdb41c57aa8a08a40030f/freezer.state
	I0927 00:41:02.929725  642893 api_server.go:204] freezer state: "THAWED"
	I0927 00:41:02.929756  642893 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:41:02.939608  642893 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:41:02.939642  642893 status.go:456] ha-239294 apiserver status = Running (err=<nil>)
	I0927 00:41:02.939653  642893 status.go:176] ha-239294 status: &{Name:ha-239294 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:41:02.939670  642893 status.go:174] checking status of ha-239294-m02 ...
	I0927 00:41:02.940003  642893 cli_runner.go:164] Run: docker container inspect ha-239294-m02 --format={{.State.Status}}
	I0927 00:41:02.963982  642893 status.go:364] ha-239294-m02 host status = "Stopped" (err=<nil>)
	I0927 00:41:02.964001  642893 status.go:377] host is not running, skipping remaining checks
	I0927 00:41:02.964022  642893 status.go:176] ha-239294-m02 status: &{Name:ha-239294-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:41:02.964043  642893 status.go:174] checking status of ha-239294-m03 ...
	I0927 00:41:02.964354  642893 cli_runner.go:164] Run: docker container inspect ha-239294-m03 --format={{.State.Status}}
	I0927 00:41:02.980586  642893 status.go:364] ha-239294-m03 host status = "Running" (err=<nil>)
	I0927 00:41:02.980608  642893 host.go:66] Checking if "ha-239294-m03" exists ...
	I0927 00:41:02.980909  642893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-239294-m03
	I0927 00:41:03.000651  642893 host.go:66] Checking if "ha-239294-m03" exists ...
	I0927 00:41:03.003256  642893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:41:03.003315  642893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-239294-m03
	I0927 00:41:03.033069  642893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/ha-239294-m03/id_rsa Username:docker}
	I0927 00:41:03.132346  642893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:41:03.144811  642893 kubeconfig.go:125] found "ha-239294" server: "https://192.168.49.254:8443"
	I0927 00:41:03.144844  642893 api_server.go:166] Checking apiserver status ...
	I0927 00:41:03.144888  642893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:41:03.156268  642893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup
	I0927 00:41:03.173042  642893 api_server.go:182] apiserver freezer: "5:freezer:/docker/8351c0d99f5fcdeaafb57b67f9f3aea65bc2b552688e7d44c0e68aa33eeda844/kubepods/burstable/pod5ae2eaec99cc41c8031630f1c85dd2f1/8fe47449f00fcb318766dea34f75ff95f4bfeceb53feba26cdeab9beb9970c4b"
	I0927 00:41:03.173128  642893 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8351c0d99f5fcdeaafb57b67f9f3aea65bc2b552688e7d44c0e68aa33eeda844/kubepods/burstable/pod5ae2eaec99cc41c8031630f1c85dd2f1/8fe47449f00fcb318766dea34f75ff95f4bfeceb53feba26cdeab9beb9970c4b/freezer.state
	I0927 00:41:03.182852  642893 api_server.go:204] freezer state: "THAWED"
	I0927 00:41:03.182898  642893 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 00:41:03.190814  642893 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 00:41:03.190843  642893 status.go:456] ha-239294-m03 apiserver status = Running (err=<nil>)
	I0927 00:41:03.190853  642893 status.go:176] ha-239294-m03 status: &{Name:ha-239294-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:41:03.190871  642893 status.go:174] checking status of ha-239294-m04 ...
	I0927 00:41:03.191187  642893 cli_runner.go:164] Run: docker container inspect ha-239294-m04 --format={{.State.Status}}
	I0927 00:41:03.209684  642893 status.go:364] ha-239294-m04 host status = "Running" (err=<nil>)
	I0927 00:41:03.209709  642893 host.go:66] Checking if "ha-239294-m04" exists ...
	I0927 00:41:03.210031  642893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-239294-m04
	I0927 00:41:03.228472  642893 host.go:66] Checking if "ha-239294-m04" exists ...
	I0927 00:41:03.228771  642893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:41:03.228809  642893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-239294-m04
	I0927 00:41:03.246375  642893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/ha-239294-m04/id_rsa Username:docker}
	I0927 00:41:03.335439  642893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:41:03.348436  642893 status.go:176] ha-239294-m04 status: &{Name:ha-239294-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
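
The stderr trace shows how status classifies an apiserver: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz on the shared HA endpoint (192.168.49.254:8443 in this run); the stopped m02 is why status exits 7, its convention for a host that is not running. A rough manual version of the probe, assuming the endpoint is reachable from the host as it is in CI:

  minikube -p ha-239294 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
  curl -k https://192.168.49.254:8443/healthz; echo   # -k: minikube's own CA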

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (19.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-239294 node start m02 -v=7 --alsologtostderr: (18.081454569s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.17s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.086372055s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-239294 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-239294 -v=7 --alsologtostderr
E0927 00:41:47.174864  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.181196  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.192560  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.213872  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.255240  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.336516  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.497976  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:47.819624  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:48.461608  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:49.743508  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:52.304866  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:41:57.426883  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-239294 -v=7 --alsologtostderr: (37.269909851s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-239294 --wait=true -v=7 --alsologtostderr
E0927 00:42:07.668833  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:42:28.150969  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:42:39.280486  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:43:06.981914  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:43:09.112809  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-239294 --wait=true -v=7 --alsologtostderr: (1m34.236872019s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-239294
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.68s)
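
The point of this subtest is that a full stop/start cycle preserves the node set. The same check by hand:

  minikube node list -p ha-239294          # record the node set
  minikube stop -p ha-239294
  minikube start -p ha-239294 --wait=true
  minikube node list -p ha-239294          # should report the same nodes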

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-239294 node delete m03 -v=7 --alsologtostderr: (9.71186693s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)
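
The go-template in the final check prints one Ready status per remaining node, so healthy output is a column of True. A slightly expanded variant that pairs each status with its node name, handy when a node is missing or NotReady:

  kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}} {{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{end}}{{end}}{{"\n"}}{{end}}'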

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-239294 stop -v=7 --alsologtostderr: (35.945566253s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr: exit status 7 (107.159787ms)

                                                
                                                
-- stdout --
	ha-239294
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-239294-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-239294-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:44:23.364563  657220 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:44:23.364679  657220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:44:23.364689  657220 out.go:358] Setting ErrFile to fd 2...
	I0927 00:44:23.364694  657220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:44:23.364953  657220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:44:23.365126  657220 out.go:352] Setting JSON to false
	I0927 00:44:23.365153  657220 mustload.go:65] Loading cluster: ha-239294
	I0927 00:44:23.365248  657220 notify.go:220] Checking for updates...
	I0927 00:44:23.365574  657220 config.go:182] Loaded profile config "ha-239294": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:44:23.365587  657220 status.go:174] checking status of ha-239294 ...
	I0927 00:44:23.366113  657220 cli_runner.go:164] Run: docker container inspect ha-239294 --format={{.State.Status}}
	I0927 00:44:23.385132  657220 status.go:364] ha-239294 host status = "Stopped" (err=<nil>)
	I0927 00:44:23.385152  657220 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:23.385159  657220 status.go:176] ha-239294 status: &{Name:ha-239294 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:44:23.385188  657220 status.go:174] checking status of ha-239294-m02 ...
	I0927 00:44:23.385501  657220 cli_runner.go:164] Run: docker container inspect ha-239294-m02 --format={{.State.Status}}
	I0927 00:44:23.404286  657220 status.go:364] ha-239294-m02 host status = "Stopped" (err=<nil>)
	I0927 00:44:23.404314  657220 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:23.404322  657220 status.go:176] ha-239294-m02 status: &{Name:ha-239294-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:44:23.404343  657220 status.go:174] checking status of ha-239294-m04 ...
	I0927 00:44:23.404643  657220 cli_runner.go:164] Run: docker container inspect ha-239294-m04 --format={{.State.Status}}
	I0927 00:44:23.428671  657220 status.go:364] ha-239294-m04 host status = "Stopped" (err=<nil>)
	I0927 00:44:23.428693  657220 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:23.428700  657220 status.go:176] ha-239294-m04 status: &{Name:ha-239294-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (78.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-239294 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 00:44:31.034970  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-239294 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.664208194s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-239294 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-239294 --control-plane -v=7 --alsologtostderr: (44.591377805s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-239294 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestJSONOutput/start/Command (85.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-280100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0927 00:46:47.173946  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:47:14.879068  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:47:39.280633  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-280100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m25.361002024s)
--- PASS: TestJSONOutput/start/Command (85.36s)
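
With --output=json every stdout line is a CloudEvents-style object (the shape is visible verbatim under TestErrorJSONOutput below), and the Distinct/Increasing subtests assert on the step events' currentstep values. A sketch of extracting those steps, assuming jq is installed:

  minikube start -p json-output-280100 --output=json --user=testUser \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
             | .data.currentstep + " " + .data.name'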

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-280100 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-280100 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-280100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-280100 --output=json --user=testUser: (5.757847588s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-643514 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-643514 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.585615ms)

-- stdout --
	{"specversion":"1.0","id":"d352fdfa-3ef0-4146-aff1-502222573f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-643514] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"790d3c9a-ff1d-4158-a96d-4624b7157976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"40f3ce9c-305b-48d2-9c5d-5afa20afe557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d42356ab-5c52-4069-8905-306777f6570c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig"}}
	{"specversion":"1.0","id":"f6faa97c-b17a-4a64-aba2-7a4503b80068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube"}}
	{"specversion":"1.0","id":"6f2e981d-0b81-499b-b437-f5c7d62196ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0b8f1a55-c553-40b3-b6d7-6446ff722f9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e62f6f34-b6e9-406e-aca8-20ee78d55d7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-643514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-643514
--- PASS: TestErrorJSONOutput (0.21s)
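
The events above are one CloudEvents-style JSON object per line, which is what makes --output=json scriptable. As a minimal sketch of consuming such a stream in Go (the struct fields mirror the keys visible in the events above; reading from stdin is an illustrative assumption):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors the envelope keys visible in the events above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into this program.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON noise on the stream
			}
			// The error event above carries exitcode and message in data.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}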

TestKicCustomNetwork/create_custom_network (41.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-877601 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-877601 --network=: (39.234925755s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-877601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-877601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-877601: (2.039720759s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.29s)

TestKicCustomNetwork/use_default_bridge_network (33.74s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-530239 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-530239 --network=bridge: (31.797276952s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-530239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-530239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-530239: (1.920471967s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.74s)

TestKicExistingNetwork (32.18s)

=== RUN   TestKicExistingNetwork
I0927 00:49:28.972016  589083 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 00:49:28.987976  589083 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 00:49:28.988042  589083 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 00:49:28.988060  589083 cli_runner.go:164] Run: docker network inspect existing-network
W0927 00:49:29.005442  589083 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 00:49:29.005479  589083 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0927 00:49:29.005500  589083 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0927 00:49:29.005640  589083 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 00:49:29.025128  589083 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4bb9986a25f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:f0:45:75} reservation:<nil>}
I0927 00:49:29.025487  589083 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a84fb0}
I0927 00:49:29.025514  589083 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 00:49:29.025570  589083 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 00:49:29.095201  589083 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-222388 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-222388 --network=existing-network: (29.643816439s)
helpers_test.go:175: Cleaning up "existing-network-222388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-222388
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-222388: (2.381404723s)
I0927 00:50:01.137125  589083 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.18s)
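
The log shows the probe-then-create flow: the inspect fails because the network does not exist yet, the taken 192.168.49.0/24 subnet is skipped, and the network is created on 192.168.58.0/24 with minikube's own labels before the profile starts. A minimal sketch of the same pre-created-network flow, assuming a plain `minikube` binary on PATH; the labels are copied from the `docker network create` call above and the profile name is illustrative:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command, aborting with its combined output on failure.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Create the bridge network up front, carrying the labels minikube
		// applies itself so later cleanup can find it.
		run("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		// Then attach a new profile to it instead of letting minikube
		// allocate its own subnet.
		run("minikube", "start", "-p", "net-demo", "--network=existing-network")
	}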

TestKicCustomSubnet (32.68s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-214019 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-214019 --subnet=192.168.60.0/24: (30.591782103s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-214019 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-214019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-214019
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-214019: (2.067759367s)
--- PASS: TestKicCustomSubnet (32.68s)

TestKicStaticIP (36.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-271308 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-271308 --static-ip=192.168.200.200: (33.933059653s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-271308 ip
helpers_test.go:175: Cleaning up "static-ip-271308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-271308
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-271308: (2.088980707s)
--- PASS: TestKicStaticIP (36.15s)
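
--static-ip pins the node container to a fixed address rather than the next free one in the subnet, and the follow-up `minikube ip` call is what confirms it took effect. A minimal sketch of that round-trip, with an illustrative profile name:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.200.200"
		// Start a profile pinned to a fixed address.
		if out, err := exec.Command("minikube", "start", "-p", "ip-demo",
			"--static-ip="+want).CombinedOutput(); err != nil {
			log.Fatalf("start failed: %v\n%s", err, out)
		}
		// `minikube ip` should print back exactly the requested address.
		out, err := exec.Command("minikube", "-p", "ip-demo", "ip").Output()
		if err != nil {
			log.Fatal(err)
		}
		if got := strings.TrimSpace(string(out)); got != want {
			log.Fatalf("got IP %q, want %q", got, want)
		}
	}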

TestMainNoArgs (0.1s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.10s)

TestMinikubeProfile (64.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-232578 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-232578 --driver=docker  --container-runtime=containerd: (28.81989358s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-236205 --driver=docker  --container-runtime=containerd
E0927 00:51:47.174655  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-236205 --driver=docker  --container-runtime=containerd: (29.956379106s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-232578
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-236205
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-236205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-236205
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-236205: (1.99526069s)
helpers_test.go:175: Cleaning up "first-232578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-232578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-232578: (2.176609721s)
--- PASS: TestMinikubeProfile (64.35s)
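
The test switches the active profile back and forth and re-reads `profile list -ojson` after each switch. A minimal sketch of decoding that JSON; the valid/invalid grouping and the Name field match current minikube output but should be treated as an assumption, not a stable schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList models the top-level shape of `minikube profile list -ojson`
	// as observed in current releases (an assumption, not a documented schema).
	type profileList struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			fmt.Println("valid profile:", p.Name)
		}
	}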

TestMountStart/serial/StartWithMountFirst (6.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-379026 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-379026 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.208793138s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.21s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-379026 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-380971 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-380971 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.915645546s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.92s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-379026 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-379026 --alsologtostderr -v=5: (1.607753639s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-380971
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-380971: (1.221077511s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-380971
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-380971: (6.569301925s)
--- PASS: TestMountStart/serial/RestartStopped (7.57s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-380971 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
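
Taken together, the MountStart subtests walk the host-mount lifecycle: a mount created at start survives deleting a sibling profile, a stop, and a restart, and every verification is the same `ssh -- ls /minikube-host`. A minimal sketch of one start-and-verify pass, with flag values mirroring the test and an illustrative profile name:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Start a Kubernetes-free profile with a host mount, as the test does.
		start := exec.Command("minikube", "start", "-p", "mount-demo",
			"--memory=2048", "--mount", "--mount-port=46464",
			"--no-kubernetes", "--driver=docker", "--container-runtime=containerd")
		if out, err := start.CombinedOutput(); err != nil {
			log.Fatalf("start failed: %v\n%s", err, out)
		}
		// Verify the mount the same way the test does: list it over ssh.
		out, err := exec.Command("minikube", "-p", "mount-demo",
			"ssh", "--", "ls", "/minikube-host").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}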

TestMultiNode/serial/FreshStart2Nodes (64.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-043503 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-043503 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.87136805s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.37s)

TestMultiNode/serial/DeployApp2Nodes (19.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-043503 -- rollout status deployment/busybox: (17.162224104s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-5m9bc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-f2rsp -- nslookup kubernetes.io
E0927 00:54:02.343815  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-5m9bc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-f2rsp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-5m9bc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-f2rsp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.10s)
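
The deployment check resolves kubernetes.io, kubernetes.default, and the fully qualified service name from a pod scheduled on each node, which proves cluster DNS works across nodes. A minimal sketch of that loop, assuming kubectl's current context already points at the cluster and the pods ship nslookup:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Collect the pod names of the busybox deployment, as the test does.
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatal(err)
		}
		names := []string{
			"kubernetes.io",
			"kubernetes.default",
			"kubernetes.default.svc.cluster.local",
		}
		for _, pod := range strings.Fields(string(out)) {
			for _, host := range names {
				res, err := exec.Command("kubectl", "exec", pod, "--",
					"nslookup", host).CombinedOutput()
				if err != nil {
					log.Fatalf("%s failed to resolve %s: %v\n%s", pod, host, err, res)
				}
			}
		}
	}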

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-5m9bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-5m9bc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-f2rsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-043503 -- exec busybox-7dff88458-f2rsp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

TestMultiNode/serial/AddNode (16.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-043503 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-043503 -v 3 --alsologtostderr: (16.062211306s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.71s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-043503 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp testdata/cp-test.txt multinode-043503:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4003604216/001/cp-test_multinode-043503.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503:/home/docker/cp-test.txt multinode-043503-m02:/home/docker/cp-test_multinode-043503_multinode-043503-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test_multinode-043503_multinode-043503-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503:/home/docker/cp-test.txt multinode-043503-m03:/home/docker/cp-test_multinode-043503_multinode-043503-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test_multinode-043503_multinode-043503-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp testdata/cp-test.txt multinode-043503-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4003604216/001/cp-test_multinode-043503-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m02:/home/docker/cp-test.txt multinode-043503:/home/docker/cp-test_multinode-043503-m02_multinode-043503.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test_multinode-043503-m02_multinode-043503.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m02:/home/docker/cp-test.txt multinode-043503-m03:/home/docker/cp-test_multinode-043503-m02_multinode-043503-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test_multinode-043503-m02_multinode-043503-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp testdata/cp-test.txt multinode-043503-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4003604216/001/cp-test_multinode-043503-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m03:/home/docker/cp-test.txt multinode-043503:/home/docker/cp-test_multinode-043503-m03_multinode-043503.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503 "sudo cat /home/docker/cp-test_multinode-043503-m03_multinode-043503.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 cp multinode-043503-m03:/home/docker/cp-test.txt multinode-043503-m02:/home/docker/cp-test_multinode-043503-m03_multinode-043503-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 ssh -n multinode-043503-m02 "sudo cat /home/docker/cp-test_multinode-043503-m03_multinode-043503-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.21s)
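
The CopyFile matrix pushes the same file host-to-node, node-to-host, and node-to-node, and every leg is verified by reading the copy back with `ssh -n <node> sudo cat`. A minimal sketch of a single push-and-read-back leg, with illustrative profile and node names:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const profile = "multinode-demo"
		// Push a local file onto the second node's filesystem.
		if out, err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt",
			profile+"-m02:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}
		// Read it back over ssh on that node, as the test helpers do.
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"-n", profile+"-m02", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("round-tripped contents: %s", out)
	}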

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-043503 node stop m03: (1.240441834s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-043503 status: exit status 7 (517.105853ms)

-- stdout --
	multinode-043503
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-043503-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-043503-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr: exit status 7 (528.036391ms)

-- stdout --
	multinode-043503
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-043503-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-043503-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:54:33.732869  711134 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:54:33.733091  711134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:54:33.733118  711134 out.go:358] Setting ErrFile to fd 2...
	I0927 00:54:33.733137  711134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:54:33.733408  711134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:54:33.733631  711134 out.go:352] Setting JSON to false
	I0927 00:54:33.733683  711134 mustload.go:65] Loading cluster: multinode-043503
	I0927 00:54:33.733711  711134 notify.go:220] Checking for updates...
	I0927 00:54:33.734147  711134 config.go:182] Loaded profile config "multinode-043503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:54:33.734204  711134 status.go:174] checking status of multinode-043503 ...
	I0927 00:54:33.734906  711134 cli_runner.go:164] Run: docker container inspect multinode-043503 --format={{.State.Status}}
	I0927 00:54:33.755498  711134 status.go:364] multinode-043503 host status = "Running" (err=<nil>)
	I0927 00:54:33.755531  711134 host.go:66] Checking if "multinode-043503" exists ...
	I0927 00:54:33.755865  711134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-043503
	I0927 00:54:33.780910  711134 host.go:66] Checking if "multinode-043503" exists ...
	I0927 00:54:33.781210  711134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:54:33.781254  711134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-043503
	I0927 00:54:33.799084  711134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33649 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/multinode-043503/id_rsa Username:docker}
	I0927 00:54:33.891843  711134 ssh_runner.go:195] Run: systemctl --version
	I0927 00:54:33.896270  711134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:54:33.908620  711134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:54:33.959891  711134 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-27 00:54:33.948821765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:54:33.960483  711134 kubeconfig.go:125] found "multinode-043503" server: "https://192.168.67.2:8443"
	I0927 00:54:33.960516  711134 api_server.go:166] Checking apiserver status ...
	I0927 00:54:33.960564  711134 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:54:33.971835  711134 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0927 00:54:33.982219  711134 api_server.go:182] apiserver freezer: "5:freezer:/docker/1c7e23dc9ad4cdc0a1d4f990bab3a8806d6e5c59e98b68cd4a28e5a9be7ca764/kubepods/burstable/podf6aa167a9419551f447acba23870c129/263bc9dac75dc6fd3fb96085cb8830d8e9170dffee1b8f99794732edcc85aaaa"
	I0927 00:54:33.982295  711134 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1c7e23dc9ad4cdc0a1d4f990bab3a8806d6e5c59e98b68cd4a28e5a9be7ca764/kubepods/burstable/podf6aa167a9419551f447acba23870c129/263bc9dac75dc6fd3fb96085cb8830d8e9170dffee1b8f99794732edcc85aaaa/freezer.state
	I0927 00:54:33.991640  711134 api_server.go:204] freezer state: "THAWED"
	I0927 00:54:33.991670  711134 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 00:54:33.999573  711134 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 00:54:33.999603  711134 status.go:456] multinode-043503 apiserver status = Running (err=<nil>)
	I0927 00:54:33.999614  711134 status.go:176] multinode-043503 status: &{Name:multinode-043503 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:54:33.999638  711134 status.go:174] checking status of multinode-043503-m02 ...
	I0927 00:54:33.999975  711134 cli_runner.go:164] Run: docker container inspect multinode-043503-m02 --format={{.State.Status}}
	I0927 00:54:34.024450  711134 status.go:364] multinode-043503-m02 host status = "Running" (err=<nil>)
	I0927 00:54:34.024475  711134 host.go:66] Checking if "multinode-043503-m02" exists ...
	I0927 00:54:34.024968  711134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-043503-m02
	I0927 00:54:34.044822  711134 host.go:66] Checking if "multinode-043503-m02" exists ...
	I0927 00:54:34.045205  711134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:54:34.045262  711134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-043503-m02
	I0927 00:54:34.064719  711134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33654 SSHKeyPath:/home/jenkins/minikube-integration/19711-583677/.minikube/machines/multinode-043503-m02/id_rsa Username:docker}
	I0927 00:54:34.163477  711134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:54:34.178799  711134 status.go:176] multinode-043503-m02 status: &{Name:multinode-043503-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:54:34.178834  711134 status.go:174] checking status of multinode-043503-m03 ...
	I0927 00:54:34.179145  711134 cli_runner.go:164] Run: docker container inspect multinode-043503-m03 --format={{.State.Status}}
	I0927 00:54:34.197659  711134 status.go:364] multinode-043503-m03 host status = "Stopped" (err=<nil>)
	I0927 00:54:34.197683  711134 status.go:377] host is not running, skipping remaining checks
	I0927 00:54:34.197691  711134 status.go:176] multinode-043503-m03 status: &{Name:multinode-043503-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
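
Note that `minikube status` exits non-zero (7 here) as soon as any node is stopped, while still printing the per-node table, so callers have to read stdout from the failed command. A minimal sketch of treating that exit code as data, with an illustrative profile name:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Output returns captured stdout even alongside a non-nil *ExitError.
		out, err := exec.Command("minikube", "-p", "multinode-demo", "status").Output()
		var ee *exec.ExitError
		switch {
		case errors.As(err, &ee):
			fmt.Printf("status exited %d: some node is not running\n", ee.ExitCode())
		case err != nil:
			log.Fatal(err) // the binary could not be run at all
		}
		fmt.Print(string(out)) // the per-node host/kubelet/apiserver table
	}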

TestMultiNode/serial/StartAfterStop (9.68s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-043503 node start m03 -v=7 --alsologtostderr: (8.921554889s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.68s)

TestMultiNode/serial/RestartKeepsNodes (98.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-043503
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-043503
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-043503: (25.226673564s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-043503 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-043503 --wait=true -v=8 --alsologtostderr: (1m13.305826161s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-043503
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.65s)
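
The restart check is a plain before/after diff: record `node list`, stop the profile, start it again with --wait=true, and confirm the node list is unchanged. A minimal sketch of that comparison, with an illustrative profile name:

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	// nodeList captures `minikube node list` output for a profile.
	func nodeList(profile string) []byte {
		out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
		if err != nil {
			log.Fatal(err)
		}
		return out
	}

	func main() {
		const profile = "multinode-demo"
		before := nodeList(profile)
		for _, args := range [][]string{
			{"stop", "-p", profile},
			{"start", "-p", profile, "--wait=true"},
		} {
			if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
				log.Fatalf("minikube %v: %v\n%s", args, err, out)
			}
		}
		if !bytes.Equal(before, nodeList(profile)) {
			log.Fatal("node list changed across restart")
		}
	}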

TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-043503 node delete m03: (4.829226843s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 stop
E0927 00:56:47.174819  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-043503 stop: (23.825871781s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-043503 status: exit status 7 (97.482136ms)

-- stdout --
	multinode-043503
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-043503-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr: exit status 7 (96.875967ms)

-- stdout --
	multinode-043503
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-043503-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 00:56:51.989327  720091 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:56:51.989474  720091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:51.989485  720091 out.go:358] Setting ErrFile to fd 2...
	I0927 00:56:51.989490  720091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:51.989742  720091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 00:56:51.989922  720091 out.go:352] Setting JSON to false
	I0927 00:56:51.989966  720091 mustload.go:65] Loading cluster: multinode-043503
	I0927 00:56:51.990014  720091 notify.go:220] Checking for updates...
	I0927 00:56:51.990382  720091 config.go:182] Loaded profile config "multinode-043503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 00:56:51.990402  720091 status.go:174] checking status of multinode-043503 ...
	I0927 00:56:51.991294  720091 cli_runner.go:164] Run: docker container inspect multinode-043503 --format={{.State.Status}}
	I0927 00:56:52.011056  720091 status.go:364] multinode-043503 host status = "Stopped" (err=<nil>)
	I0927 00:56:52.011082  720091 status.go:377] host is not running, skipping remaining checks
	I0927 00:56:52.011091  720091 status.go:176] multinode-043503 status: &{Name:multinode-043503 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:56:52.011119  720091 status.go:174] checking status of multinode-043503-m02 ...
	I0927 00:56:52.011486  720091 cli_runner.go:164] Run: docker container inspect multinode-043503-m02 --format={{.State.Status}}
	I0927 00:56:52.041183  720091 status.go:364] multinode-043503-m02 host status = "Stopped" (err=<nil>)
	I0927 00:56:52.041206  720091 status.go:377] host is not running, skipping remaining checks
	I0927 00:56:52.041213  720091 status.go:176] multinode-043503-m02 status: &{Name:multinode-043503-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (50.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-043503 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 00:57:39.280916  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-043503 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.928326052s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-043503 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.67s)

TestMultiNode/serial/ValidateNameConflict (34.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-043503
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-043503-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-043503-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.855897ms)

-- stdout --
	* [multinode-043503-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-043503-m02' is duplicated with machine name 'multinode-043503-m02' in profile 'multinode-043503'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-043503-m03 --driver=docker  --container-runtime=containerd
E0927 00:58:10.241125  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-043503-m03 --driver=docker  --container-runtime=containerd: (31.731421473s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-043503
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-043503: exit status 80 (307.807883ms)

-- stdout --
	* Adding node m03 to cluster multinode-043503 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-043503-m03 already exists in multinode-043503-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-043503-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-043503-m03: (1.983484163s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.17s)

TestPreload (120.8s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-808952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-808952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.848775477s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-808952 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-808952 image pull gcr.io/k8s-minikube/busybox: (1.936168987s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-808952
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-808952: (12.056048534s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-808952 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-808952 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.106024479s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-808952 image list
helpers_test.go:175: Cleaning up "test-preload-808952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-808952
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-808952: (2.430121477s)
--- PASS: TestPreload (120.80s)
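
The preload round-trip starts with --preload=false so images are pulled from the registry, caches one extra image, then stops and restarts to confirm the image store survives. A minimal sketch of the same sequence; the image and Kubernetes version come from the test, the profile name is illustrative:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// mk runs a minikube subcommand and returns its combined output.
	func mk(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		mk("start", "-p", "preload-demo", "--preload=false",
			"--kubernetes-version=v1.24.4", "--container-runtime=containerd")
		mk("-p", "preload-demo", "image", "pull", "gcr.io/k8s-minikube/busybox")
		mk("stop", "-p", "preload-demo")
		mk("start", "-p", "preload-demo", "--container-runtime=containerd")
		if !strings.Contains(mk("-p", "preload-demo", "image", "list"), "busybox") {
			log.Fatal("cached image missing after restart")
		}
	}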

TestScheduledStopUnix (106.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-621708 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-621708 --memory=2048 --driver=docker  --container-runtime=containerd: (30.635313607s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621708 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-621708 -n scheduled-stop-621708
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621708 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 01:00:52.785786  589083 retry.go:31] will retry after 149.051µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.786274  589083 retry.go:31] will retry after 103.404µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.787394  589083 retry.go:31] will retry after 267.743µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.788511  589083 retry.go:31] will retry after 454.378µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.789626  589083 retry.go:31] will retry after 395.248µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.790740  589083 retry.go:31] will retry after 955.376µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.791820  589083 retry.go:31] will retry after 709.634µs: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.792905  589083 retry.go:31] will retry after 1.896403ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.795406  589083 retry.go:31] will retry after 2.289044ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.798622  589083 retry.go:31] will retry after 4.716449ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.803848  589083 retry.go:31] will retry after 5.238857ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.809350  589083 retry.go:31] will retry after 10.454938ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.820573  589083 retry.go:31] will retry after 14.613194ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.837458  589083 retry.go:31] will retry after 18.360353ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.856714  589083 retry.go:31] will retry after 17.876243ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
I0927 01:00:52.874899  589083 retry.go:31] will retry after 64.043822ms: open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/scheduled-stop-621708/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621708 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621708 -n scheduled-stop-621708
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621708
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621708 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0927 01:01:47.174235  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621708
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-621708: exit status 7 (86.22923ms)

                                                
                                                
-- stdout --
	scheduled-stop-621708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621708 -n scheduled-stop-621708
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621708 -n scheduled-stop-621708: exit status 7 (65.226353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-621708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-621708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-621708: (4.763685405s)
--- PASS: TestScheduledStopUnix (106.93s)
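
For reference, the scheduled-stop surface the test drives, reduced to its commands (profile name illustrative; flags as logged):

    minikube start -p sched-demo --memory=2048 --driver=docker --container-runtime=containerd
    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm the pending stop
    minikube stop -p sched-demo --schedule 15s       # re-arm with a short timer
    minikube status -p sched-demo                    # exits 7 once the host is Stopped (may be ok)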

                                                
                                    
TestInsufficientStorage (10.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-268462 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-268462 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.802259758s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6ecf8b4c-83e3-4a01-8115-6624ba508566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-268462] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b23bdc09-2d4a-42e2-a8ba-00a578e4478e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"772f0536-31b6-459e-8440-195c0340f135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1bc71985-69f1-48b5-83be-5ca93b6f1e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig"}}
	{"specversion":"1.0","id":"f3ab6f4a-d864-4408-bcbd-a92f9684e803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube"}}
	{"specversion":"1.0","id":"7df00ae9-a7ce-410d-8ad4-50d52d7a5fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"11c7c9f4-38df-4e7c-822a-21615b96638e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5164ddd2-d216-4f16-b3ab-c4f742e5b23e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0a33d010-fc98-4cc9-9809-52392c6029bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d71b5178-3029-4987-8cca-de9cc38c3828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"060af068-832e-4a58-bd55-689f52b1f5c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"deb99b2d-8c05-45eb-975e-e764d71225eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-268462\" primary control-plane node in \"insufficient-storage-268462\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7701f411-1346-41c4-9970-cd7991ea2f41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b61ee30d-2b1a-40e7-b429-32dced0b59c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"00f877fe-d707-4371-9fbe-f1b621b4f0cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-268462 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-268462 --output=json --layout=cluster: exit status 7 (286.858585ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-268462","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268462","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:02:16.685064  738742 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-268462" does not appear in /home/jenkins/minikube-integration/19711-583677/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-268462 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-268462 --output=json --layout=cluster: exit status 7 (281.942496ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-268462","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268462","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:02:16.968994  738804 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-268462" does not appear in /home/jenkins/minikube-integration/19711-583677/kubeconfig
	E0927 01:02:16.980240  738804 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/insufficient-storage-268462/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-268462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-268462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-268462: (1.84682795s)
--- PASS: TestInsufficientStorage (10.22s)
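
The low-disk condition is simulated with the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events above; a rough reproduction, using the same values:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=containerd
    # expected: exit status 26 (RSRC_DOCKER_STORAGE); the event advice suggests
    # "docker system prune" to free space, or --force to skip the check entirely
    minikube status -p storage-demo --output=json --layout=cluster   # StatusName: InsufficientStorage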

                                                
                                    
TestRunningBinaryUpgrade (82.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2912214704 start -p running-upgrade-679750 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0927 01:07:39.280443  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2912214704 start -p running-upgrade-679750 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (42.510419849s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-679750 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-679750 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.671421802s)
helpers_test.go:175: Cleaning up "running-upgrade-679750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-679750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-679750: (2.580114877s)
--- PASS: TestRunningBinaryUpgrade (82.61s)
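
The upgrade path is simply two starts against the same profile: first with a released v1.26.0 binary, then with the binary under test. Note from the log that the old release takes --vm-driver where the new one takes --driver (binary paths illustrative):

    /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd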

                                                
                                    
TestKubernetesUpgrade (350.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.563310255s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-157618
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-157618: (1.236401675s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-157618 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-157618 status --format={{.Host}}: exit status 7 (66.827715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.702171137s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-157618 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (97.229751ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-157618] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-157618
	    minikube start -p kubernetes-upgrade-157618 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1576182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-157618 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-157618 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.334537222s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-157618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-157618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-157618: (2.48607187s)
--- PASS: TestKubernetesUpgrade (350.59s)
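
The version walk condenses to: start on v1.20.0, stop, restart on v1.31.1, then confirm a downgrade is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). Per the suggestion block above, the supported way back is to recreate:

    minikube delete -p kubernetes-upgrade-157618
    minikube start -p kubernetes-upgrade-157618 --kubernetes-version=v1.20.0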

                                                
                                    
TestMissingContainerUpgrade (193.84s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.664175647 start -p missing-upgrade-637244 --memory=2200 --driver=docker  --container-runtime=containerd
E0927 01:02:39.280273  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.664175647 start -p missing-upgrade-637244 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.76897982s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-637244
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-637244: (10.300897871s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-637244
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-637244 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-637244 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m23.502394328s)
helpers_test.go:175: Cleaning up "missing-upgrade-637244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-637244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-637244: (2.310443619s)
--- PASS: TestMissingContainerUpgrade (193.84s)
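
Here the node container is removed out from under a cluster created by the old binary, and the new binary has to recover it from profile state alone; the essential steps from the log:

    docker stop missing-upgrade-637244   # stop the node container directly
    docker rm missing-upgrade-637244     # delete it, leaving only the minikube profile
    out/minikube-linux-arm64 start -p missing-upgrade-637244 --memory=2200 --driver=docker --container-runtime=containerd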

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (86.529134ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-043462] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
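
As the MK_USAGE message says, --kubernetes-version and --no-kubernetes are mutually exclusive; if a version is pinned in the global config, clear it before a no-Kubernetes start:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-043462 --no-kubernetes --driver=docker --container-runtime=containerd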

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-043462 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-043462 --driver=docker  --container-runtime=containerd: (40.620045307s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-043462 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.83s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.425797915s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-043462 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-043462 status -o json: exit status 2 (297.393822ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-043462","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-043462
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-043462: (2.106727859s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.83s)
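
The JSON status shown above lends itself to scripted assertions; a small sketch, assuming jq is available on the host:

    minikube -p NoKubernetes-043462 status -o json | jq -r '.Kubelet'   # prints "Stopped"
    # minikube itself exits 2 here (components stopped), which matters under set -o pipefail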

                                                
                                    
TestNoKubernetes/serial/Start (9.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-043462 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.62966342s)
--- PASS: TestNoKubernetes/serial/Start (9.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-043462 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-043462 "sudo systemctl is-active --quiet service kubelet": exit status 1 (248.098787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
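
The check is a plain systemd probe over SSH; systemctl's exit status 3 means the unit is inactive, so the non-zero exit is the expected (passing) outcome:

    minikube ssh -p NoKubernetes-043462 "sudo systemctl is-active --quiet service kubelet"
    # exits 1 (wrapping ssh status 3) when kubelet is not running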

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-043462
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-043462: (1.229027976s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.58s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-043462 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-043462 --driver=docker  --container-runtime=containerd: (6.581400181s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-043462 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-043462 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.595714ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (103.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1258233992 start -p stopped-upgrade-309570 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1258233992 start -p stopped-upgrade-309570 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.298000142s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1258233992 -p stopped-upgrade-309570 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1258233992 -p stopped-upgrade-309570 stop: (20.02145002s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-309570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0927 01:06:47.174525  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-309570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.697309807s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.02s)
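
Unlike the running-binary variant, this flow stops the cluster with the old binary first, so the new binary upgrades a cold profile (commands as logged; tmp suffix elided):

    /tmp/minikube-v1.26.0 start -p stopped-upgrade-309570 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0 -p stopped-upgrade-309570 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-309570 --memory=2200 --driver=docker --container-runtime=containerd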

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-309570
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-309570: (1.048258049s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestPause/serial/Start (99.55s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-319676 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-319676 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m39.547905925s)
--- PASS: TestPause/serial/Start (99.55s)

                                                
                                    
TestNetworkPlugins/group/false (3.59s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-945654 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-945654 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (165.79933ms)

                                                
                                                
-- stdout --
	* [false-945654] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:09:56.135260  778632 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:09:56.135447  778632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:56.135458  778632 out.go:358] Setting ErrFile to fd 2...
	I0927 01:09:56.135463  778632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:09:56.135709  778632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-583677/.minikube/bin
	I0927 01:09:56.136236  778632 out.go:352] Setting JSON to false
	I0927 01:09:56.137287  778632 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17531,"bootTime":1727381865,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 01:09:56.137366  778632 start.go:139] virtualization:  
	I0927 01:09:56.139867  778632 out.go:177] * [false-945654] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 01:09:56.142089  778632 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:09:56.142208  778632 notify.go:220] Checking for updates...
	I0927 01:09:56.145969  778632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:09:56.148022  778632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-583677/kubeconfig
	I0927 01:09:56.149809  778632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-583677/.minikube
	I0927 01:09:56.151551  778632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 01:09:56.153807  778632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:09:56.155990  778632 config.go:182] Loaded profile config "pause-319676": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 01:09:56.156099  778632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:09:56.177732  778632 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 01:09:56.177855  778632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:09:56.240166  778632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 01:09:56.230405143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:09:56.240282  778632 docker.go:318] overlay module found
	I0927 01:09:56.243272  778632 out.go:177] * Using the docker driver based on user configuration
	I0927 01:09:56.245412  778632 start.go:297] selected driver: docker
	I0927 01:09:56.245427  778632 start.go:901] validating driver "docker" against <nil>
	I0927 01:09:56.245441  778632 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:09:56.247949  778632 out.go:201] 
	W0927 01:09:56.249692  778632 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0927 01:09:56.251529  778632 out.go:201] 

                                                
                                                
** /stderr **
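
The exit above is the guard under test: with the containerd runtime a CNI is mandatory, so --cni=false is rejected before any cluster is created (exit status 14, MK_USAGE). A start that keeps containerd must leave CNI selection enabled, e.g. the default auto-selection (profile name illustrative):

    minikube start -p cni-demo --driver=docker --container-runtime=containerd   # a CNI is chosen automatically
    # minikube start -p cni-demo --cni=false --container-runtime=containerd     # rejected: containerd requires CNI
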
net_test.go:88: 
----------------------- debugLogs start: false-945654 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-945654

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-945654

>>> host: /etc/nsswitch.conf:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/hosts:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/resolv.conf:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-945654

>>> host: crictl pods:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: crictl containers:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> k8s: describe netcat deployment:
error: context "false-945654" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-945654" does not exist

>>> k8s: netcat logs:
error: context "false-945654" does not exist

>>> k8s: describe coredns deployment:
error: context "false-945654" does not exist

>>> k8s: describe coredns pods:
error: context "false-945654" does not exist

>>> k8s: coredns logs:
error: context "false-945654" does not exist

>>> k8s: describe api server pod(s):
error: context "false-945654" does not exist

>>> k8s: api server logs:
error: context "false-945654" does not exist

>>> host: /etc/cni:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: ip a s:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: ip r s:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: iptables-save:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: iptables table nat:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> k8s: describe kube-proxy daemon set:
error: context "false-945654" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-945654" does not exist

>>> k8s: kube-proxy logs:
error: context "false-945654" does not exist

>>> host: kubelet daemon status:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: kubelet daemon config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> k8s: kubelet logs:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-319676
contexts:
- context:
    cluster: pause-319676
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-319676
  name: pause-319676
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-319676
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.crt
    client-key: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-945654

>>> host: docker daemon status:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: docker daemon config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/docker/daemon.json:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: docker system info:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: cri-docker daemon status:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: cri-docker daemon config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: cri-dockerd version:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: containerd daemon status:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: containerd daemon config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/containerd/config.toml:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: containerd config dump:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: crio daemon status:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: crio daemon config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: /etc/crio:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

>>> host: crio config:
* Profile "false-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945654"

----------------------- debugLogs end: false-945654 [took: 3.253298378s] --------------------------------
helpers_test.go:175: Cleaning up "false-945654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-945654
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestPause/serial/SecondStartNoReconfiguration (7.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-319676 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-319676 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.46214742s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.49s)

TestPause/serial/Pause (1.1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-319676 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-319676 --alsologtostderr -v=5: (1.100281012s)
--- PASS: TestPause/serial/Pause (1.10s)

TestPause/serial/VerifyStatus (0.5s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-319676 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-319676 --output=json --layout=cluster: exit status 2 (501.583275ms)

-- stdout --
	{"Name":"pause-319676","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-319676","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.50s)
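
For reference, the cluster-layout JSON shown above is straightforward to consume programmatically. A minimal Go sketch, assuming `minikube` is on PATH and using the profile name from this run; the struct fields are inferred from the sample output above rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// `minikube status` exits non-zero (exit status 2 above) for a paused
	// cluster, so the exit error is deliberately ignored and stdout decoded.
	out, _ := exec.Command("minikube", "status", "-p", "pause-319676",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}

Note how the sample output encodes the paused state: the apiserver component carries StatusCode 418 ("Paused") while the kubelet reports 405 ("Stopped").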

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-319676 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.1s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-319676 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-319676 --alsologtostderr -v=5: (1.098121153s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

TestPause/serial/DeletePaused (3.09s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-319676 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-319676 --alsologtostderr -v=5: (3.092905457s)
--- PASS: TestPause/serial/DeletePaused (3.09s)

TestPause/serial/VerifyDeletedResources (0.79s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-319676
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-319676: exit status 1 (25.622621ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-319676: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)
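
The deletion check above relies on `docker volume inspect` failing ("no such volume") once the profile's volume is gone. A minimal standalone sketch of the same idea in Go, with the profile name hardcoded from this run:

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect` fails for the name,
// which is the expected outcome after `minikube delete` has run.
func volumeGone(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	if volumeGone("pause-319676") {
		fmt.Println("volume deleted as expected")
	} else {
		fmt.Println("volume still present")
	}
}

Here the non-zero exit is the success case, mirroring how the test treats the "Non-zero exit" above as a pass.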

TestStartStop/group/old-k8s-version/serial/FirstStart (148.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-636783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0927 01:11:47.174036  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:12:39.280528  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-636783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m28.071691965s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.07s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636783 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c50d6a4e-d799-4f67-97e5-50e07234578e] Pending
helpers_test.go:344: "busybox" [c50d6a4e-d799-4f67-97e5-50e07234578e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c50d6a4e-d799-4f67-97e5-50e07234578e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.006531042s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.53s)
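
The DeployApp step above boils down to: create the busybox pod, wait for it to become Ready, then exec `ulimit -n` inside it. A rough standalone equivalent driving kubectl directly (context name taken from this run; `kubectl wait` stands in for the harness's pod-polling helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "old-k8s-version-636783" // kubeconfig context from the log above
	// Block until the pod is Ready, mirroring the 8m0s wait in the test.
	if err := exec.Command("kubectl", "--context", ctx, "wait",
		"--for=condition=Ready", "pod/busybox", "--timeout=8m").Run(); err != nil {
		panic(err)
	}
	// Read the open-file limit inside the container, as the test does.
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit in pod: %s", out)
}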

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-636783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-636783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-636783 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-636783 --alsologtostderr -v=3: (12.087590377s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636783 -n old-k8s-version-636783
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636783 -n old-k8s-version-636783: exit status 7 (79.69076ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-636783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
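
The "status error: exit status 7 (may be ok)" note above reflects that `minikube status` encodes host state in its exit code, so a stopped cluster surfaces as a non-zero exit rather than an error message. A small Go sketch that surfaces the code instead of treating it as fatal (binary path as used in the log; substitute plain `minikube` outside the test workspace):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-636783")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out)) // e.g. "Stopped"
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 7 here accompanies a stopped host, which the test accepts.
		fmt.Printf("host %q, exit code %d (may be ok)\n", host, exitErr.ExitCode())
		return
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("host %q, running\n", host)
}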

TestStartStop/group/old-k8s-version/serial/SecondStart (154.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-636783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-636783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m34.166464017s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-636783 -n old-k8s-version-636783
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (154.52s)

TestStartStop/group/no-preload/serial/FirstStart (68.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-788162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:14:50.243098  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-788162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m8.231035052s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.23s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-788162 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d3ad023-cc30-40cc-873b-5b9577faee7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d3ad023-cc30-40cc-873b-5b9577faee7b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004240391s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-788162 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-788162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-788162 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-788162 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-788162 --alsologtostderr -v=3: (12.04524528s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-788162 -n no-preload-788162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-788162 -n no-preload-788162: exit status 7 (73.755631ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-788162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (290.45s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-788162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:16:47.174751  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-788162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m50.063192094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-788162 -n no-preload-788162
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (290.45s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8w958" [fc63f3a9-09e7-498f-9706-1ab6a1217196] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004428478s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8w958" [fc63f3a9-09e7-498f-9706-1ab6a1217196] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.049406892s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-636783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.47s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-636783 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
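
The image check above lists everything loaded in the profile and notes images that did not come from a minikube-owned registry. A rough Go equivalent, assuming the default `minikube image list` output of one image reference per line; the registry allow-list here is illustrative, not the test's actual list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-636783",
		"image", "list").Output()
	if err != nil {
		panic(err)
	}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		// Anything outside these registries gets reported, similar to the
		// "Found non-minikube image" lines above.
		if !strings.HasPrefix(img, "registry.k8s.io/") &&
			!strings.HasPrefix(img, "gcr.io/k8s-minikube/") {
			fmt.Println("non-minikube image:", img)
		}
	}
}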

TestStartStop/group/old-k8s-version/serial/Pause (3.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-636783 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636783 -n old-k8s-version-636783
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636783 -n old-k8s-version-636783: exit status 2 (393.424164ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636783 -n old-k8s-version-636783
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636783 -n old-k8s-version-636783: exit status 2 (459.466296ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-636783 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-636783 -n old-k8s-version-636783
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-636783 -n old-k8s-version-636783
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.55s)
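
The Pause step above follows a fixed cycle: pause, confirm the apiserver reports "Paused" and the kubelet "Stopped" (both delivered via non-zero `status` exits), then unpause. A condensed sketch of that cycle, assuming `minikube` is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status reads one field of `minikube status`; non-zero exits are expected
// while paused, so only stdout is inspected.
func status(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "old-k8s-version-636783" // profile name from the log above
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", status(profile, "APIServer")) // expect "Paused"
	fmt.Println("kubelet:  ", status(profile, "Kubelet"))   // expect "Stopped"
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
}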

TestStartStop/group/embed-certs/serial/FirstStart (53.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-483591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:17:39.280951  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-483591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (53.007015415s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.01s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-483591 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e3e8e50-74ef-4878-a42c-aecb59bb3289] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8e3e8e50-74ef-4878-a42c-aecb59bb3289] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004018837s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-483591 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-483591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-483591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.009600283s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-483591 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/Stop (12.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-483591 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-483591 --alsologtostderr -v=3: (12.054631719s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483591 -n embed-certs-483591
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483591 -n embed-certs-483591: exit status 7 (67.673705ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-483591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-483591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:18:51.005014  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.012487  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.024055  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.045530  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.086955  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.168597  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.330131  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:51.651808  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:52.293821  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:53.575222  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:18:56.137545  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:01.259724  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:11.501036  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:31.982657  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:20:12.945515  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-483591 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.223972472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483591 -n embed-certs-483591
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vn86h" [b2307d0b-aebf-464c-a931-02e4cbe04f5b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004676531s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vn86h" [b2307d0b-aebf-464c-a931-02e4cbe04f5b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003436036s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-788162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-788162 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-788162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-788162 -n no-preload-788162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-788162 -n no-preload-788162: exit status 2 (322.404659ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-788162 -n no-preload-788162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-788162 -n no-preload-788162: exit status 2 (329.526334ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-788162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-788162 -n no-preload-788162
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-788162 -n no-preload-788162
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-253513 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:21:34.867503  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:21:47.174633  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-253513 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m24.18588913s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.19s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-253513 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8765f621-3c5f-45f2-bec9-2ca9ee3268e3] Pending
helpers_test.go:344: "busybox" [8765f621-3c5f-45f2-bec9-2ca9ee3268e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8765f621-3c5f-45f2-bec9-2ca9ee3268e3] Running
E0927 01:22:39.281091  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/addons-376302/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004061567s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-253513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-253513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-253513 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-253513 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-253513 --alsologtostderr -v=3: (12.260845344s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7rbzf" [ca01e0d7-a8d5-4fb0-b251-1f7671664317] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004401714s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7rbzf" [ca01e0d7-a8d5-4fb0-b251-1f7671664317] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007825206s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-483591 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513: exit status 7 (80.816423ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-253513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-253513 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-253513 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m31.29836793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.75s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-483591 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-483591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483591 -n embed-certs-483591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483591 -n embed-certs-483591: exit status 2 (356.044555ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483591 -n embed-certs-483591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483591 -n embed-certs-483591: exit status 2 (382.889743ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-483591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483591 -n embed-certs-483591
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483591 -n embed-certs-483591
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.73s)

TestStartStop/group/newest-cni/serial/FirstStart (38.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-312616 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-312616 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (38.575155325s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-312616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-312616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274492824s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-312616 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-312616 --alsologtostderr -v=3: (1.274772479s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-312616 -n newest-cni-312616
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-312616 -n newest-cni-312616: exit status 7 (74.089889ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-312616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (16.06s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-312616 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 01:23:51.004136  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-312616 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.631790093s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-312616 -n newest-cni-312616
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.06s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-312616 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.03s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-312616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-312616 -n newest-cni-312616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-312616 -n newest-cni-312616: exit status 2 (338.684379ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-312616 -n newest-cni-312616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-312616 -n newest-cni-312616: exit status 2 (330.234128ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-312616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-312616 -n newest-cni-312616
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-312616 -n newest-cni-312616
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)
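For reference, the pause cycle above can be replayed by hand (profile name taken from this run); the bracketed notes are annotations, not command output:

  out/minikube-linux-arm64 pause -p newest-cni-312616
  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-312616   [prints "Paused", exit status 2]
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-312616     [prints "Stopped", exit status 2]
  out/minikube-linux-arm64 unpause -p newest-cni-312616
  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-312616   [exit status 0 once unpaused]

The non-zero exits while paused are deliberate, which is why the harness logs them as "(may be ok)".
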
TestNetworkPlugins/group/auto/Start (91.59s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0927 01:24:18.708831  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.437990  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.444497  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.455892  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.477328  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.518806  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.600215  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:37.761608  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:38.083419  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:38.724950  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:40.007277  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:25:42.569710  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m31.591895962s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-945654 "pgrep -a kubelet"
I0927 01:25:43.952231  589083 config.go:182] Loaded profile config "auto-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6tghj" [e6df6215-2be4-4ab7-9dec-02e0e847e656] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6tghj" [e6df6215-2be4-4ab7-9dec-02e0e847e656] Running
E0927 01:25:47.691626  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003878084s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)
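The "waiting 15m0s for pods matching app=netcat" lines are the harness's own readiness poller; outside the suite, roughly the same check can be expressed with plain kubectl (a sketch, not the test's actual implementation):

  kubectl --context auto-945654 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m
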
TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
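The three probes above exercise distinct data paths: DNS resolves the in-cluster name kubernetes.default, Localhost has the netcat pod dial its own port on localhost, and HairPin has it dial its own Service name ("netcat"), which only succeeds when hairpin traffic (pod-to-self via a Service VIP) is forwarded correctly. In the nc invocations, -z connects without sending data, -w 5 bounds each connect timeout, and -i 5 adds an interval between attempts.
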
TestNetworkPlugins/group/flannel/Start (48.65s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0927 01:26:18.415174  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:47.173910  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:59.377031  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (48.646959058s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.65s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-77hlg" [09fedf6a-78c9-41f1-9024-5a852ae633c9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00607216s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-945654 "pgrep -a kubelet"
I0927 01:27:09.304440  589083 config.go:182] Loaded profile config "flannel-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5gfd8" [7c9c3414-969b-4f30-a1cb-04bebb0ebc87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5gfd8" [7c9c3414-969b-4f30-a1cb-04bebb0ebc87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003736704s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mghlm" [a9ebccf1-c9d7-4818-81c1-589866b9886a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003648752s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mghlm" [a9ebccf1-c9d7-4818-81c1-589866b9886a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004457583s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-253513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-253513 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-253513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-253513 --alsologtostderr -v=1: (1.115898429s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513: exit status 2 (407.938882ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513: exit status 2 (473.015503ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-253513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253513 -n default-k8s-diff-port-253513
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.13s)
E0927 01:31:47.174116  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/functional-775062/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.019674  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.026187  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.037696  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.059154  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.100633  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.182214  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.343891  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:03.665604  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:04.307450  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:05.588888  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:06.164799  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/auto-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:08.150620  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:13.271998  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (73.92s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m13.91814387s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.92s)

TestNetworkPlugins/group/custom-flannel/Start (62.86s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0927 01:28:21.298436  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:51.003382  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/old-k8s-version-636783/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.858251825s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-945654 "pgrep -a kubelet"
I0927 01:28:52.146922  589083 config.go:182] Loaded profile config "custom-flannel-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dxd22" [790346d6-756c-4d69-aada-47a0e1a430ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dxd22" [790346d6-756c-4d69-aada-47a0e1a430ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003635532s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s4jnp" [54236f50-2c1f-484b-bdfa-2b363ed02f79] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004865324s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-945654 "pgrep -a kubelet"
I0927 01:29:00.843700  589083 config.go:182] Loaded profile config "calico-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kl7pc" [7d3dbebc-d51a-4d81-bcc2-9adf3f937416] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kl7pc" [7d3dbebc-d51a-4d81-bcc2-9adf3f937416] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003923115s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.3s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/Start (92.92s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m32.924184653s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.92s)

TestNetworkPlugins/group/bridge/Start (51.91s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (51.908754216s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.91s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-945654 "pgrep -a kubelet"
I0927 01:30:29.655546  589083 config.go:182] Loaded profile config "bridge-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4k8vn" [78bcc260-3ef2-4e02-b52d-658183fa5449] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4k8vn" [78bcc260-3ef2-4e02-b52d-658183fa5449] Running
E0927 01:30:37.438750  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003729021s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (75.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-945654 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m15.842003635s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.84s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fcsvh" [f4631e55-b399-4121-a2fc-b23fb34a1353] Running
E0927 01:31:04.720797  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/auto-945654/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:31:05.140252  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/no-preload-788162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004367955s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-945654 "pgrep -a kubelet"
I0927 01:31:07.278368  589083 config.go:182] Loaded profile config "kindnet-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-srq6d" [5761430e-8811-48e8-9c4c-f718a9902ffc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-srq6d" [5761430e-8811-48e8-9c4c-f718a9902ffc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004958664s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-945654 "pgrep -a kubelet"
I0927 01:32:15.829385  589083 config.go:182] Loaded profile config "enable-default-cni-945654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-945654 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dl5ff" [cf8c6179-0be0-42ef-9aaa-5783ecebd5f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dl5ff" [cf8c6179-0be0-42ef-9aaa-5783ecebd5f1] Running
E0927 01:32:23.514223  589083 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/flannel-945654/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004810465s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-945654 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-945654 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-397635 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-397635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-397635
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for HyperKit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for HyperKit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for HyperKit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-467427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-467427
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.58s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-945654 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-945654

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-945654

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/hosts:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/resolv.conf:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-945654

>>> host: crictl pods:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: crictl containers:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> k8s: describe netcat deployment:
error: context "kubenet-945654" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-945654" does not exist

>>> k8s: netcat logs:
error: context "kubenet-945654" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-945654" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-945654" does not exist

>>> k8s: coredns logs:
error: context "kubenet-945654" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-945654" does not exist

>>> k8s: api server logs:
error: context "kubenet-945654" does not exist

>>> host: /etc/cni:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: ip a s:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: ip r s:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: iptables-save:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: iptables table nat:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-945654" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-945654" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-945654" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: kubelet daemon config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> k8s: kubelet logs:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-319676
contexts:
- context:
    cluster: pause-319676
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-319676
  name: pause-319676
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-319676
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.crt
    client-key: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-945654

>>> host: docker daemon status:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: docker daemon config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: docker system info:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: cri-docker daemon status:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: cri-docker daemon config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: cri-dockerd version:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: containerd daemon status:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: containerd daemon config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: containerd config dump:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: crio daemon status:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: crio daemon config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: /etc/crio:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

>>> host: crio config:
* Profile "kubenet-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945654"

----------------------- debugLogs end: kubenet-945654 [took: 3.415471584s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-945654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-945654
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)

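Every probe in the debugLogs block above fails with a missing-context or missing-profile error for the same reason: the kubenet-945654 profile was skipped before any cluster was started, so the kubeconfig dump shows current-context: "" and only leftover pause-319676 entries. A small standalone Go illustration of the failing probe shape (hypothetical; not the exact debug-helper invocation):

package main

import (
	"fmt"
	"os/exec"
)

// Each kubectl probe pins --context to the profile under test; when the
// profile was never started, kubectl fails with "context was not found
// for specified context", which is exactly what the log records.
func main() {
	out, err := exec.Command("kubectl", "--context", "kubenet-945654",
		"get", "nodes").CombinedOutput()
	fmt.Printf("err: %v\n%s", err, out)
}
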
TestNetworkPlugins/group/cilium (4.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-945654 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-945654

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-945654

>>> host: /etc/nsswitch.conf:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/hosts:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/resolv.conf:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-945654

>>> host: crictl pods:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: crictl containers:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> k8s: describe netcat deployment:
error: context "cilium-945654" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-945654" does not exist

>>> k8s: netcat logs:
error: context "cilium-945654" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-945654" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-945654" does not exist

>>> k8s: coredns logs:
error: context "cilium-945654" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-945654" does not exist

>>> k8s: api server logs:
error: context "cilium-945654" does not exist

>>> host: /etc/cni:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: ip a s:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: ip r s:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: iptables-save:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: iptables table nat:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-945654

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-945654

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-945654" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-945654" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-945654

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-945654

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-945654" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-945654" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-945654" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-945654" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-945654" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: kubelet daemon config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> k8s: kubelet logs:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-583677/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-319676
contexts:
- context:
    cluster: pause-319676
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-319676
  name: pause-319676
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-319676
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.crt
    client-key: /home/jenkins/minikube-integration/19711-583677/.minikube/profiles/pause-319676/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-945654

>>> host: docker daemon status:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: docker daemon config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: docker system info:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: cri-docker daemon status:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: cri-docker daemon config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: cri-dockerd version:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: containerd daemon status:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: containerd daemon config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: containerd config dump:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: crio daemon status:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: crio daemon config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: /etc/crio:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

>>> host: crio config:
* Profile "cilium-945654" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945654"

----------------------- debugLogs end: cilium-945654 [took: 4.185811619s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-945654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-945654
--- SKIP: TestNetworkPlugins/group/cilium (4.34s)
