Test Report: Docker_Linux_containerd_arm64 19774

95efbc930ecf4c942ef544a2e8709bfd2a544710:2024-10-08:36559

Failed tests (2/328)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 211.21       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 377.02       |
TestAddons/serial/Volcano (211.21s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:819: volcano-controller stabilized in 56.063747ms
addons_test.go:803: volcano-scheduler stabilized in 56.133338ms
addons_test.go:811: volcano-admission stabilized in 56.17912ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-65r4d" [6e5bd60a-88e3-423c-921e-e94e2c7d7f4c] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003583872s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-hpn22" [1680d8ab-4a38-4f57-ab7a-a7f00ae60556] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003754545s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-qbzrm" [546634ef-5828-4b48-b062-719b32cced22] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005387811s
addons_test.go:838: (dbg) Run:  kubectl --context addons-246349 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-246349 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-246349 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a75b011b-36df-45c9-9e92-f10b1c6f3c11] Pending
helpers_test.go:344: "test-job-nginx-0" [a75b011b-36df-45c9-9e92-f10b1c6f3c11] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-246349 -n addons-246349
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-08 18:07:22.342755242 +0000 UTC m=+377.788852296
addons_test.go:870: (dbg) Run:  kubectl --context addons-246349 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-246349 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-e0b0a74f-29dc-4939-ac64-a844d746ab21
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m27fg (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-m27fg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-246349 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-246349 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
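The failure above reduces to scheduling: test-job-nginx-0 requests and limits 1 CPU (see the describe output), and the volcano scheduler reports "0/1 nodes are unavailable: 1 Insufficient cpu." for the single node, which was started with --cpus=2 and already hosts the addon pods. A minimal way to verify the shortfall against this profile is sketched below; the node name addons-246349 is assumed from the usual single-node minikube naming, and these are standard kubectl commands rather than steps the test itself runs:

	# hypothetical manual check: compare the node's allocatable/requested CPU with the pod's request
	kubectl --context addons-246349 describe node addons-246349 | grep -A 8 'Allocated resources'
	kubectl --context addons-246349 get pod test-job-nginx-0 -n my-volcano -o jsonpath='{.spec.containers[0].resources}'
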
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-246349
helpers_test.go:235: (dbg) docker inspect addons-246349:

-- stdout --
	[
	    {
	        "Id": "b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129",
	        "Created": "2024-10-08T18:01:53.50466759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-08T18:01:53.662285803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/hosts",
	        "LogPath": "/var/lib/docker/containers/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129/b9855b2e0c72fce7bc52d302cdffce6e9a508a5f6676cdc9a7ec32f4295ec129-json.log",
	        "Name": "/addons-246349",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-246349:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-246349",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a-init/diff:/var/lib/docker/overlay2/211ed394d64374fe90b3e50a914ebed5f9b85a2e1d8650161b42163931148dcb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66e83a199c835e1ea1618fc86a0613fb863b49a68940d27301f39f12aa13878a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-246349",
	                "Source": "/var/lib/docker/volumes/addons-246349/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-246349",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-246349",
	                "name.minikube.sigs.k8s.io": "addons-246349",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba58cd093c80c58e5ed6645deebd3075792f73b3bbef2695519383d02ddbafbb",
	            "SandboxKey": "/var/run/docker/netns/ba58cd093c80",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-246349": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2a180a99cf8d2485671b427907cc072dabd588376eba9049e3d11f70ac4770c9",
	                    "EndpointID": "f78544cafbd2ce86c1d7c806a6029264f42fbaaee48bda0e72945bf9cca700c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-246349",
	                        "b9855b2e0c72"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
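The inspect output above also records the resource envelope of the profile container: "NanoCpus": 2000000000 (2 CPUs) and "Memory": 4194304000 bytes (4000 MiB, from the --memory=4000 start flag), which is the budget the Volcano job's 1-CPU request has to fit into alongside the addons. If only those fields are of interest, a Go-template query is enough; this is standard docker inspect templating, not something the test harness runs:

	# hypothetical manual check of the container's CPU/memory limits
	docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-246349
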
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-246349 -n addons-246349
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 logs -n 25: (1.564869787s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-945652   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | -p download-only-945652              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| delete  | -p download-only-945652              | download-only-945652   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| start   | -o=json --download-only              | download-only-063477   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | -p download-only-063477              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| delete  | -p download-only-063477              | download-only-063477   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| delete  | -p download-only-945652              | download-only-945652   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| delete  | -p download-only-063477              | download-only-063477   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| start   | --download-only -p                   | download-docker-419107 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | download-docker-419107               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-419107            | download-docker-419107 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| start   | --download-only -p                   | binary-mirror-075119   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | binary-mirror-075119                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34241               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-075119              | binary-mirror-075119   | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| addons  | disable dashboard -p                 | addons-246349          | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | addons-246349                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-246349          | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | addons-246349                        |                        |         |         |                     |                     |
	| start   | -p addons-246349 --wait=true         | addons-246349          | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:04 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:01:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:01:29.278418  289308 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:01:29.278619  289308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:29.278647  289308 out.go:358] Setting ErrFile to fd 2...
	I1008 18:01:29.278667  289308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:29.278949  289308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:01:29.279465  289308 out.go:352] Setting JSON to false
	I1008 18:01:29.280386  289308 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6238,"bootTime":1728404252,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:01:29.280487  289308 start.go:139] virtualization:  
	I1008 18:01:29.282193  289308 out.go:177] * [addons-246349] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:01:29.283475  289308 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:01:29.283554  289308 notify.go:220] Checking for updates...
	I1008 18:01:29.285740  289308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:01:29.286922  289308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:01:29.287950  289308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:01:29.289102  289308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:01:29.290141  289308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:01:29.291368  289308 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:01:29.311273  289308 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:01:29.311410  289308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:29.380635  289308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:29.370884062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:29.380761  289308 docker.go:318] overlay module found
	I1008 18:01:29.382798  289308 out.go:177] * Using the docker driver based on user configuration
	I1008 18:01:29.384007  289308 start.go:297] selected driver: docker
	I1008 18:01:29.384034  289308 start.go:901] validating driver "docker" against <nil>
	I1008 18:01:29.384047  289308 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:01:29.384711  289308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:29.434459  289308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:29.422164892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:29.434663  289308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:01:29.434901  289308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:01:29.436093  289308 out.go:177] * Using Docker driver with root privileges
	I1008 18:01:29.437058  289308 cni.go:84] Creating CNI manager for ""
	I1008 18:01:29.437125  289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:01:29.437136  289308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 18:01:29.437209  289308 start.go:340] cluster config:
	{Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:01:29.438480  289308 out.go:177] * Starting "addons-246349" primary control-plane node in "addons-246349" cluster
	I1008 18:01:29.439600  289308 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1008 18:01:29.440837  289308 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1008 18:01:29.441867  289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:01:29.441921  289308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1008 18:01:29.441933  289308 cache.go:56] Caching tarball of preloaded images
	I1008 18:01:29.441955  289308 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1008 18:01:29.442016  289308 preload.go:172] Found /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 18:01:29.442027  289308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1008 18:01:29.442368  289308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json ...
	I1008 18:01:29.442437  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json: {Name:mk94e4f0080f368eed201b4abc12c0f546003cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:01:29.456568  289308 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1008 18:01:29.456702  289308 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1008 18:01:29.456728  289308 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1008 18:01:29.456732  289308 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1008 18:01:29.456740  289308 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1008 18:01:29.456745  289308 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1008 18:01:46.591054  289308 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1008 18:01:46.591094  289308 cache.go:194] Successfully downloaded all kic artifacts
	I1008 18:01:46.591134  289308 start.go:360] acquireMachinesLock for addons-246349: {Name:mke529fb19b7ca87311bc65a32cc4a27a559389d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:01:46.591262  289308 start.go:364] duration metric: took 104.937µs to acquireMachinesLock for "addons-246349"
	I1008 18:01:46.591294  289308 start.go:93] Provisioning new machine with config: &{Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 18:01:46.591382  289308 start.go:125] createHost starting for "" (driver="docker")
	I1008 18:01:46.594434  289308 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1008 18:01:46.594682  289308 start.go:159] libmachine.API.Create for "addons-246349" (driver="docker")
	I1008 18:01:46.594717  289308 client.go:168] LocalClient.Create starting
	I1008 18:01:46.594831  289308 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem
	I1008 18:01:46.911568  289308 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem
	I1008 18:01:47.674158  289308 cli_runner.go:164] Run: docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 18:01:47.688663  289308 cli_runner.go:211] docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 18:01:47.688754  289308 network_create.go:284] running [docker network inspect addons-246349] to gather additional debugging logs...
	I1008 18:01:47.688777  289308 cli_runner.go:164] Run: docker network inspect addons-246349
	W1008 18:01:47.704166  289308 cli_runner.go:211] docker network inspect addons-246349 returned with exit code 1
	I1008 18:01:47.704205  289308 network_create.go:287] error running [docker network inspect addons-246349]: docker network inspect addons-246349: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-246349 not found
	I1008 18:01:47.704220  289308 network_create.go:289] output of [docker network inspect addons-246349]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-246349 not found
	
	** /stderr **
	I1008 18:01:47.704330  289308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 18:01:47.720298  289308 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400187c880}
	I1008 18:01:47.720348  289308 network_create.go:124] attempt to create docker network addons-246349 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 18:01:47.720408  289308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-246349 addons-246349
	I1008 18:01:47.787152  289308 network_create.go:108] docker network addons-246349 192.168.49.0/24 created
	I1008 18:01:47.787184  289308 kic.go:121] calculated static IP "192.168.49.2" for the "addons-246349" container
	I1008 18:01:47.787269  289308 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 18:01:47.802194  289308 cli_runner.go:164] Run: docker volume create addons-246349 --label name.minikube.sigs.k8s.io=addons-246349 --label created_by.minikube.sigs.k8s.io=true
	I1008 18:01:47.818822  289308 oci.go:103] Successfully created a docker volume addons-246349
	I1008 18:01:47.818922  289308 cli_runner.go:164] Run: docker run --rm --name addons-246349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --entrypoint /usr/bin/test -v addons-246349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1008 18:01:49.397817  289308 cli_runner.go:217] Completed: docker run --rm --name addons-246349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --entrypoint /usr/bin/test -v addons-246349:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (1.578853468s)
	I1008 18:01:49.397847  289308 oci.go:107] Successfully prepared a docker volume addons-246349
	I1008 18:01:49.397868  289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:01:49.397887  289308 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 18:01:49.397955  289308 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-246349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 18:01:53.438303  289308 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-246349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.040309943s)
	I1008 18:01:53.438342  289308 kic.go:203] duration metric: took 4.040451094s to extract preloaded images to volume ...
	W1008 18:01:53.438475  289308 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 18:01:53.438583  289308 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 18:01:53.490414  289308 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-246349 --name addons-246349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-246349 --network addons-246349 --ip 192.168.49.2 --volume addons-246349:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1008 18:01:53.827521  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Running}}
	I1008 18:01:53.845576  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:01:53.870112  289308 cli_runner.go:164] Run: docker exec addons-246349 stat /var/lib/dpkg/alternatives/iptables
	I1008 18:01:53.952672  289308 oci.go:144] the created container "addons-246349" has a running status.
	I1008 18:01:53.952699  289308 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa...
	I1008 18:01:54.173506  289308 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 18:01:54.193593  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:01:54.221734  289308 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 18:01:54.221756  289308 kic_runner.go:114] Args: [docker exec --privileged addons-246349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 18:01:54.298564  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:01:54.335312  289308 machine.go:93] provisionDockerMachine start ...
	I1008 18:01:54.335406  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:54.365052  289308 main.go:141] libmachine: Using SSH client type: native
	I1008 18:01:54.365311  289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1008 18:01:54.365327  289308 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:01:54.365985  289308 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 18:01:57.497335  289308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246349
	
	I1008 18:01:57.497366  289308 ubuntu.go:169] provisioning hostname "addons-246349"
	I1008 18:01:57.497440  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:57.520350  289308 main.go:141] libmachine: Using SSH client type: native
	I1008 18:01:57.520617  289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1008 18:01:57.520636  289308 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-246349 && echo "addons-246349" | sudo tee /etc/hostname
	I1008 18:01:57.661825  289308 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246349
	
	I1008 18:01:57.661909  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:57.680856  289308 main.go:141] libmachine: Using SSH client type: native
	I1008 18:01:57.681110  289308 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1008 18:01:57.681132  289308 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246349/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:01:57.809766  289308 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:01:57.809795  289308 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19774-283126/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-283126/.minikube}
	I1008 18:01:57.809816  289308 ubuntu.go:177] setting up certificates
	I1008 18:01:57.809826  289308 provision.go:84] configureAuth start
	I1008 18:01:57.809887  289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
	I1008 18:01:57.826322  289308 provision.go:143] copyHostCerts
	I1008 18:01:57.826403  289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem (1078 bytes)
	I1008 18:01:57.826563  289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem (1123 bytes)
	I1008 18:01:57.826633  289308 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem (1679 bytes)
	I1008 18:01:57.826687  289308 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem org=jenkins.addons-246349 san=[127.0.0.1 192.168.49.2 addons-246349 localhost minikube]
	I1008 18:01:58.107470  289308 provision.go:177] copyRemoteCerts
	I1008 18:01:58.107542  289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:01:58.107583  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:58.126075  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:01:58.218625  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 18:01:58.243252  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 18:01:58.268504  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:01:58.293150  289308 provision.go:87] duration metric: took 483.298553ms to configureAuth
	I1008 18:01:58.293177  289308 ubuntu.go:193] setting minikube options for container-runtime
	I1008 18:01:58.293390  289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:01:58.293402  289308 machine.go:96] duration metric: took 3.958071381s to provisionDockerMachine
	I1008 18:01:58.293409  289308 client.go:171] duration metric: took 11.698680459s to LocalClient.Create
	I1008 18:01:58.293429  289308 start.go:167] duration metric: took 11.698747441s to libmachine.API.Create "addons-246349"
	I1008 18:01:58.293440  289308 start.go:293] postStartSetup for "addons-246349" (driver="docker")
	I1008 18:01:58.293449  289308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:01:58.293504  289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:01:58.293548  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:58.309976  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:01:58.402718  289308 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:01:58.406162  289308 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 18:01:58.406200  289308 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1008 18:01:58.406238  289308 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1008 18:01:58.406253  289308 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1008 18:01:58.406263  289308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/addons for local assets ...
	I1008 18:01:58.406338  289308 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/files for local assets ...
	I1008 18:01:58.406372  289308 start.go:296] duration metric: took 112.92593ms for postStartSetup
	I1008 18:01:58.406688  289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
	I1008 18:01:58.422340  289308 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/config.json ...
	I1008 18:01:58.422632  289308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:01:58.422682  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:58.439210  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:01:58.531022  289308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 18:01:58.535776  289308 start.go:128] duration metric: took 11.944377164s to createHost
	I1008 18:01:58.535801  289308 start.go:83] releasing machines lock for "addons-246349", held for 11.94452606s
	I1008 18:01:58.535873  289308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246349
	I1008 18:01:58.552606  289308 ssh_runner.go:195] Run: cat /version.json
	I1008 18:01:58.552623  289308 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:01:58.552662  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:58.552720  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:01:58.572193  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:01:58.586133  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:01:58.665133  289308 ssh_runner.go:195] Run: systemctl --version
	I1008 18:01:58.797409  289308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 18:01:58.801727  289308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1008 18:01:58.827252  289308 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1008 18:01:58.827330  289308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:01:58.857318  289308 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
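Note on the two find/sed runs above: the first patches any loopback CNI config in place (adding a "name" field if it is missing and pinning cniVersion to 1.0.0), while the second simply renames competing bridge/podman configs so they no longer match. A sketch of what the second step amounts to for the two files named in the log (illustrative only; the test performs this through ssh_runner):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled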
	I1008 18:01:58.857341  289308 start.go:495] detecting cgroup driver to use...
	I1008 18:01:58.857381  289308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 18:01:58.857437  289308 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 18:01:58.870073  289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 18:01:58.882083  289308 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:01:58.882147  289308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:01:58.896771  289308 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:01:58.911604  289308 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:01:58.999715  289308 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:01:59.093807  289308 docker.go:233] disabling docker service ...
	I1008 18:01:59.093879  289308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:01:59.114299  289308 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:01:59.126268  289308 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:01:59.214638  289308 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:01:59.303331  289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:01:59.314816  289308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:01:59.330641  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1008 18:01:59.340503  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 18:01:59.350162  289308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 18:01:59.350276  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 18:01:59.360931  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:01:59.370913  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 18:01:59.380725  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:01:59.390312  289308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:01:59.399480  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 18:01:59.409528  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 18:01:59.419248  289308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
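The run of sed commands above rewrites a handful of containerd settings in place. Assuming the stock config.toml layout shipped in the kicbase image, the end state can be spot-checked with a sketch like the following (expected values are taken from the sed expressions in the log, not from a captured file):

	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# Expected after the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true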
	I1008 18:01:59.429322  289308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:01:59.437961  289308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:01:59.446851  289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:01:59.538332  289308 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 18:01:59.668537  289308 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 18:01:59.668693  289308 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 18:01:59.672173  289308 start.go:563] Will wait 60s for crictl version
	I1008 18:01:59.672236  289308 ssh_runner.go:195] Run: which crictl
	I1008 18:01:59.675556  289308 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:01:59.716433  289308 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1008 18:01:59.716518  289308 ssh_runner.go:195] Run: containerd --version
	I1008 18:01:59.738929  289308 ssh_runner.go:195] Run: containerd --version
	I1008 18:01:59.767370  289308 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1008 18:01:59.770201  289308 cli_runner.go:164] Run: docker network inspect addons-246349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 18:01:59.785784  289308 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 18:01:59.789412  289308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:01:59.800144  289308 kubeadm.go:883] updating cluster {Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:01:59.800275  289308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:01:59.800342  289308 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:01:59.836516  289308 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:01:59.836540  289308 containerd.go:534] Images already preloaded, skipping extraction
	I1008 18:01:59.836599  289308 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:01:59.875826  289308 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:01:59.875850  289308 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:01:59.875858  289308 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1008 18:01:59.875951  289308 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:01:59.876020  289308 ssh_runner.go:195] Run: sudo crictl info
	I1008 18:01:59.912459  289308 cni.go:84] Creating CNI manager for ""
	I1008 18:01:59.912485  289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:01:59.912495  289308 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:01:59.912518  289308 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246349 NodeName:addons-246349 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:01:59.912650  289308 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-246349"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:01:59.912723  289308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:01:59.921638  289308 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:01:59.921731  289308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:01:59.930744  289308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 18:01:59.949112  289308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:01:59.967515  289308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1008 18:01:59.987043  289308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 18:01:59.990585  289308 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
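Both /etc/hosts edits (this one and the one at 18:01:59.789412 above) follow the same pattern: strip any stale line for the name with grep -v, append the fresh mapping, and copy the temp file back over /etc/hosts. A sketch of the resulting entries on the node, using only the names and IPs from the two commands:

	grep minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal
	# 192.168.49.2	control-plane.minikube.internal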
	I1008 18:02:00.002324  289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:02:00.093478  289308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:02:00.112654  289308 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349 for IP: 192.168.49.2
	I1008 18:02:00.112683  289308 certs.go:194] generating shared ca certs ...
	I1008 18:02:00.112705  289308 certs.go:226] acquiring lock for ca certs: {Name:mk9b4a4bb626944e2ef6352dc46232c13e820586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:00.112861  289308 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key
	I1008 18:02:01.095619  289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt ...
	I1008 18:02:01.095656  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt: {Name:mk6969eb7cf1a3587be1795d424d67277866ca0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:01.095886  289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key ...
	I1008 18:02:01.095901  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key: {Name:mk4e91d6155c29d94b5277a3c747b1852e798f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:01.095996  289308 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key
	I1008 18:02:01.550152  289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt ...
	I1008 18:02:01.550184  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt: {Name:mkd127f52a9e243d3bf49581033f9c43927a305f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:01.550389  289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key ...
	I1008 18:02:01.550405  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key: {Name:mk5534a5a90f70d374aace592195b18ea32d220f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:01.550487  289308 certs.go:256] generating profile certs ...
	I1008 18:02:01.550552  289308 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key
	I1008 18:02:01.550580  289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt with IP's: []
	I1008 18:02:02.021490  289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt ...
	I1008 18:02:02.021522  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: {Name:mk1373c5dc4bbc33d45f7cfe069209ca7c0c5fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.021722  289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key ...
	I1008 18:02:02.021737  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.key: {Name:mkc8101c3cf4d35b1bf598206c9e6092646c5995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.021823  289308 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0
	I1008 18:02:02.021845  289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 18:02:02.296153  289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 ...
	I1008 18:02:02.296183  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0: {Name:mk7491c5231d1f7adeb0cab2720c5ac4f612baed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.296736  289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0 ...
	I1008 18:02:02.296755  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0: {Name:mk5d12e162a35a1810c72cce431d4b479dc6c40d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.296854  289308 certs.go:381] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt.3c3158c0 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt
	I1008 18:02:02.296936  289308 certs.go:385] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key.3c3158c0 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key
	I1008 18:02:02.296992  289308 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key
	I1008 18:02:02.297012  289308 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt with IP's: []
	I1008 18:02:02.774343  289308 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt ...
	I1008 18:02:02.774375  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt: {Name:mka0a3694f2abba948f1a2cff851748ae260ee68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.774563  289308 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key ...
	I1008 18:02:02.774578  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key: {Name:mkac479fa4598dd9d4a98c039c1642b5c0032f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:02.774770  289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:02:02.774815  289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem (1078 bytes)
	I1008 18:02:02.774845  289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:02:02.774878  289308 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem (1679 bytes)
	I1008 18:02:02.775503  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:02:02.800021  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:02:02.824552  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:02:02.848749  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 18:02:02.872633  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 18:02:02.895915  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:02:02.919464  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:02:02.943946  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:02:02.968294  289308 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:02:02.992275  289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:02:03.010179  289308 ssh_runner.go:195] Run: openssl version
	I1008 18:02:03.015711  289308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:02:03.025098  289308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:02:03.029017  289308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:02:03.029092  289308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:02:03.036092  289308 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
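The link name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is what the "openssl x509 -hash -noout" call above prints. A sketch of the equivalent manual steps with the same paths (illustrative; the test drives this through ssh_runner):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH resolves to b5213941 here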
	I1008 18:02:03.045912  289308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:02:03.049314  289308 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 18:02:03.049360  289308 kubeadm.go:392] StartCluster: {Name:addons-246349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:02:03.049460  289308 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1008 18:02:03.049532  289308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:02:03.087401  289308 cri.go:89] found id: ""
	I1008 18:02:03.087473  289308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 18:02:03.100262  289308 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 18:02:03.109244  289308 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 18:02:03.109315  289308 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:02:03.120882  289308 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:02:03.120905  289308 kubeadm.go:157] found existing configuration files:
	
	I1008 18:02:03.120956  289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:02:03.130866  289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:02:03.130939  289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:02:03.139493  289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:02:03.148879  289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:02:03.148953  289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:02:03.158295  289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:02:03.167468  289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:02:03.167534  289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:02:03.175939  289308 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:02:03.184741  289308 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:02:03.184818  289308 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:02:03.193713  289308 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 18:02:03.234276  289308 kubeadm.go:310] W1008 18:02:03.233565    1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 18:02:03.235106  289308 kubeadm.go:310] W1008 18:02:03.234598    1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 18:02:03.258811  289308 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1008 18:02:03.318083  289308 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:02:20.675326  289308 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 18:02:20.675386  289308 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:02:20.675483  289308 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1008 18:02:20.675544  289308 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1008 18:02:20.675584  289308 kubeadm.go:310] OS: Linux
	I1008 18:02:20.675634  289308 kubeadm.go:310] CGROUPS_CPU: enabled
	I1008 18:02:20.675684  289308 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1008 18:02:20.675734  289308 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1008 18:02:20.675806  289308 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1008 18:02:20.675865  289308 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1008 18:02:20.675937  289308 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1008 18:02:20.675986  289308 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1008 18:02:20.676052  289308 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1008 18:02:20.676114  289308 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1008 18:02:20.676202  289308 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:02:20.676304  289308 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:02:20.676411  289308 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 18:02:20.676480  289308 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:02:20.679221  289308 out.go:235]   - Generating certificates and keys ...
	I1008 18:02:20.679319  289308 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:02:20.679389  289308 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:02:20.679460  289308 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 18:02:20.679519  289308 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 18:02:20.679582  289308 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 18:02:20.679635  289308 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 18:02:20.679701  289308 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 18:02:20.679820  289308 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-246349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 18:02:20.679875  289308 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 18:02:20.679998  289308 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-246349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 18:02:20.680067  289308 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 18:02:20.680133  289308 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 18:02:20.680181  289308 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 18:02:20.680239  289308 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:02:20.680293  289308 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:02:20.680352  289308 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 18:02:20.680412  289308 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:02:20.680478  289308 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:02:20.680535  289308 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:02:20.680618  289308 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 18:02:20.680688  289308 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 18:02:20.683385  289308 out.go:235]   - Booting up control plane ...
	I1008 18:02:20.683493  289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 18:02:20.683581  289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 18:02:20.683652  289308 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 18:02:20.683756  289308 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:02:20.683844  289308 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:02:20.683888  289308 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:02:20.684019  289308 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 18:02:20.684125  289308 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 18:02:20.684197  289308 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500856043s
	I1008 18:02:20.684272  289308 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 18:02:20.684333  289308 kubeadm.go:310] [api-check] The API server is healthy after 6.001281442s
	I1008 18:02:20.684442  289308 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 18:02:20.684569  289308 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 18:02:20.684631  289308 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 18:02:20.684811  289308 kubeadm.go:310] [mark-control-plane] Marking the node addons-246349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 18:02:20.684871  289308 kubeadm.go:310] [bootstrap-token] Using token: 0kq8kp.ln4racqss42qwugy
	I1008 18:02:20.687570  289308 out.go:235]   - Configuring RBAC rules ...
	I1008 18:02:20.687707  289308 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 18:02:20.687818  289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 18:02:20.687994  289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 18:02:20.688189  289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 18:02:20.688332  289308 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 18:02:20.688457  289308 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 18:02:20.688588  289308 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 18:02:20.688646  289308 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 18:02:20.688706  289308 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 18:02:20.688717  289308 kubeadm.go:310] 
	I1008 18:02:20.688786  289308 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 18:02:20.688795  289308 kubeadm.go:310] 
	I1008 18:02:20.688873  289308 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 18:02:20.688881  289308 kubeadm.go:310] 
	I1008 18:02:20.688924  289308 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 18:02:20.688987  289308 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 18:02:20.689038  289308 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 18:02:20.689042  289308 kubeadm.go:310] 
	I1008 18:02:20.689101  289308 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 18:02:20.689106  289308 kubeadm.go:310] 
	I1008 18:02:20.689156  289308 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 18:02:20.689162  289308 kubeadm.go:310] 
	I1008 18:02:20.689221  289308 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 18:02:20.689297  289308 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 18:02:20.689376  289308 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 18:02:20.689396  289308 kubeadm.go:310] 
	I1008 18:02:20.689495  289308 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 18:02:20.689580  289308 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 18:02:20.689588  289308 kubeadm.go:310] 
	I1008 18:02:20.689700  289308 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0kq8kp.ln4racqss42qwugy \
	I1008 18:02:20.689806  289308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329 \
	I1008 18:02:20.689835  289308 kubeadm.go:310] 	--control-plane 
	I1008 18:02:20.689845  289308 kubeadm.go:310] 
	I1008 18:02:20.689961  289308 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 18:02:20.689979  289308 kubeadm.go:310] 
	I1008 18:02:20.690095  289308 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0kq8kp.ln4racqss42qwugy \
	I1008 18:02:20.690264  289308 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329 
	I1008 18:02:20.690280  289308 cni.go:84] Creating CNI manager for ""
	I1008 18:02:20.690299  289308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:02:20.694837  289308 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 18:02:20.697474  289308 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 18:02:20.702106  289308 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 18:02:20.702128  289308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 18:02:20.720647  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 18:02:20.997710  289308 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 18:02:20.997862  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246349 minikube.k8s.io/updated_at=2024_10_08T18_02_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=addons-246349 minikube.k8s.io/primary=true
	I1008 18:02:20.997866  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:21.005977  289308 ops.go:34] apiserver oom_adj: -16
	I1008 18:02:21.129688  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:21.630397  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:22.129815  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:22.630402  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:23.130268  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:23.630719  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:24.129716  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:24.629768  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:25.130671  289308 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:02:25.241765  289308 kubeadm.go:1113] duration metric: took 4.243964247s to wait for elevateKubeSystemPrivileges
	I1008 18:02:25.241793  289308 kubeadm.go:394] duration metric: took 22.19243706s to StartCluster
	I1008 18:02:25.241810  289308 settings.go:142] acquiring lock: {Name:mk88999f347ab2e93b53f54a6e8df12c27df7c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:25.241932  289308 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:02:25.242321  289308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/kubeconfig: {Name:mkc40596aa3771ba8a6c8897a16b531991d7bc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:02:25.242925  289308 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 18:02:25.243057  289308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 18:02:25.243297  289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:02:25.243324  289308 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1008 18:02:25.243395  289308 addons.go:69] Setting yakd=true in profile "addons-246349"
	I1008 18:02:25.243409  289308 addons.go:234] Setting addon yakd=true in "addons-246349"
	I1008 18:02:25.243431  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.243955  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.244341  289308 addons.go:69] Setting metrics-server=true in profile "addons-246349"
	I1008 18:02:25.244380  289308 addons.go:234] Setting addon metrics-server=true in "addons-246349"
	I1008 18:02:25.244407  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.244852  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.246732  289308 addons.go:69] Setting cloud-spanner=true in profile "addons-246349"
	I1008 18:02:25.247776  289308 addons.go:234] Setting addon cloud-spanner=true in "addons-246349"
	I1008 18:02:25.247937  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.248517  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.249401  289308 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-246349"
	I1008 18:02:25.249483  289308 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-246349"
	I1008 18:02:25.249541  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.250066  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.256052  289308 addons.go:69] Setting default-storageclass=true in profile "addons-246349"
	I1008 18:02:25.256141  289308 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-246349"
	I1008 18:02:25.256562  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.247693  289308 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-246349"
	I1008 18:02:25.256964  289308 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-246349"
	I1008 18:02:25.256997  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.257444  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.257595  289308 addons.go:69] Setting gcp-auth=true in profile "addons-246349"
	I1008 18:02:25.257618  289308 mustload.go:65] Loading cluster: addons-246349
	I1008 18:02:25.257949  289308 config.go:182] Loaded profile config "addons-246349": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:02:25.258195  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.247702  289308 addons.go:69] Setting registry=true in profile "addons-246349"
	I1008 18:02:25.263705  289308 addons.go:234] Setting addon registry=true in "addons-246349"
	I1008 18:02:25.263829  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.264482  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.273776  289308 addons.go:69] Setting ingress=true in profile "addons-246349"
	I1008 18:02:25.273870  289308 addons.go:234] Setting addon ingress=true in "addons-246349"
	I1008 18:02:25.273952  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.247708  289308 addons.go:69] Setting storage-provisioner=true in profile "addons-246349"
	I1008 18:02:25.274644  289308 addons.go:234] Setting addon storage-provisioner=true in "addons-246349"
	I1008 18:02:25.274729  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.276353  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.247717  289308 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-246349"
	I1008 18:02:25.277255  289308 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246349"
	I1008 18:02:25.247721  289308 addons.go:69] Setting volcano=true in profile "addons-246349"
	I1008 18:02:25.277888  289308 addons.go:234] Setting addon volcano=true in "addons-246349"
	I1008 18:02:25.278023  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.276424  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.247756  289308 out.go:177] * Verifying Kubernetes components...
	I1008 18:02:25.317411  289308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:02:25.318374  289308 addons.go:69] Setting ingress-dns=true in profile "addons-246349"
	I1008 18:02:25.318590  289308 addons.go:234] Setting addon ingress-dns=true in "addons-246349"
	I1008 18:02:25.318695  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.319315  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.247724  289308 addons.go:69] Setting volumesnapshots=true in profile "addons-246349"
	I1008 18:02:25.335526  289308 addons.go:234] Setting addon volumesnapshots=true in "addons-246349"
	I1008 18:02:25.335682  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.336279  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.368318  289308 addons.go:69] Setting inspektor-gadget=true in profile "addons-246349"
	I1008 18:02:25.368351  289308 addons.go:234] Setting addon inspektor-gadget=true in "addons-246349"
	I1008 18:02:25.368392  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.368973  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.400736  289308 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1008 18:02:25.405190  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.428843  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.434426  289308 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 18:02:25.434690  289308 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 18:02:25.434707  289308 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 18:02:25.434797  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.469243  289308 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 18:02:25.469318  289308 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 18:02:25.473426  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.477103  289308 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1008 18:02:25.477339  289308 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1008 18:02:25.480299  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.513876  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 18:02:25.514065  289308 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1008 18:02:25.515055  289308 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 18:02:25.515073  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 18:02:25.515142  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.520176  289308 addons.go:234] Setting addon default-storageclass=true in "addons-246349"
	I1008 18:02:25.531070  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.529263  289308 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1008 18:02:25.529282  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 18:02:25.531847  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.537869  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 18:02:25.538348  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.547417  289308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 18:02:25.537943  289308 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 18:02:25.547470  289308 out.go:177]   - Using image docker.io/registry:2.8.3
	I1008 18:02:25.550836  289308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 18:02:25.547478  289308 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:02:25.551076  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.547484  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 18:02:25.551586  289308 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 18:02:25.551855  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 18:02:25.551945  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.573603  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 18:02:25.573772  289308 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1008 18:02:25.581903  289308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:02:25.581932  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 18:02:25.582003  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.589006  289308 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-246349"
	I1008 18:02:25.589098  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:25.589565  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:25.616815  289308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1008 18:02:25.619628  289308 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 18:02:25.619650  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 18:02:25.619718  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.622775  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 18:02:25.625493  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 18:02:25.632973  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 18:02:25.634288  289308 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 18:02:25.634310  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1008 18:02:25.634398  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.638305  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.643644  289308 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1008 18:02:25.643719  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 18:02:25.647977  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.649615  289308 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1008 18:02:25.649641  289308 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1008 18:02:25.649848  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.662287  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 18:02:25.665110  289308 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 18:02:25.671900  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 18:02:25.671924  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 18:02:25.672002  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.705203  289308 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1008 18:02:25.713855  289308 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1008 18:02:25.717761  289308 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1008 18:02:25.728928  289308 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1008 18:02:25.728951  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1008 18:02:25.729021  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.729502  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.759742  289308 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 18:02:25.759765  289308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 18:02:25.759828  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.768802  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.769609  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.785085  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.785202  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.786905  289308 out.go:177]   - Using image docker.io/busybox:stable
	I1008 18:02:25.793136  289308 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 18:02:25.799449  289308 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 18:02:25.799472  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 18:02:25.799539  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:25.844143  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.872160  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.875786  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.888058  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.888799  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:25.890127  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	W1008 18:02:25.893932  289308 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 18:02:25.893961  289308 retry.go:31] will retry after 177.194408ms: ssh: handshake failed: EOF
	I1008 18:02:25.902732  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:26.309381  289308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:02:26.309508  289308 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.066432912s)
	I1008 18:02:26.309735  289308 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
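	Note: the pipeline in the line above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway. A minimal sketch of the Corefile fragment it produces, showing only the inserted directives (the 192.168.49.1 address and the hosts/log insertions come from the sed expressions in the command itself; the rest of the stock Corefile is elided):
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}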
	I1008 18:02:26.323142  289308 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 18:02:26.323162  289308 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 18:02:26.386165  289308 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 18:02:26.386234  289308 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 18:02:26.558849  289308 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 18:02:26.558870  289308 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 18:02:26.570907  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1008 18:02:26.602941  289308 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1008 18:02:26.603006  289308 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1008 18:02:26.619588  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 18:02:26.627916  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 18:02:26.627945  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 18:02:26.642559  289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 18:02:26.642587  289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 18:02:26.652091  289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 18:02:26.652124  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 18:02:26.664379  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:02:26.719683  289308 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 18:02:26.719709  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 18:02:26.729248  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 18:02:26.730692  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 18:02:26.778709  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 18:02:26.813132  289308 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1008 18:02:26.813161  289308 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1008 18:02:26.829269  289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 18:02:26.829298  289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 18:02:26.864306  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:02:26.876827  289308 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 18:02:26.876856  289308 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 18:02:26.924619  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 18:02:26.924652  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 18:02:26.971450  289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 18:02:26.971512  289308 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 18:02:26.992496  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 18:02:27.037277  289308 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 18:02:27.037324  289308 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 18:02:27.088211  289308 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1008 18:02:27.088253  289308 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1008 18:02:27.140198  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 18:02:27.170808  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 18:02:27.170836  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 18:02:27.207675  289308 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 18:02:27.207699  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 18:02:27.212796  289308 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:02:27.212820  289308 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 18:02:27.245891  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 18:02:27.277990  289308 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1008 18:02:27.278013  289308 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1008 18:02:27.346165  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 18:02:27.346187  289308 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 18:02:27.385334  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 18:02:27.385357  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 18:02:27.438941  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:02:27.470830  289308 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1008 18:02:27.470909  289308 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1008 18:02:27.534797  289308 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 18:02:27.534872  289308 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 18:02:27.597321  289308 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 18:02:27.597403  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 18:02:27.639973  289308 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1008 18:02:27.640052  289308 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1008 18:02:27.722545  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 18:02:27.766993  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 18:02:27.767069  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 18:02:27.779359  289308 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 18:02:27.779438  289308 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1008 18:02:27.920743  289308 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 18:02:27.920822  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1008 18:02:27.971136  289308 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.661370535s)
	I1008 18:02:27.971252  289308 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.66184293s)
	I1008 18:02:27.971219  289308 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1008 18:02:27.973404  289308 node_ready.go:35] waiting up to 6m0s for node "addons-246349" to be "Ready" ...
	I1008 18:02:27.977906  289308 node_ready.go:49] node "addons-246349" has status "Ready":"True"
	I1008 18:02:27.977978  289308 node_ready.go:38] duration metric: took 4.504105ms for node "addons-246349" to be "Ready" ...
	I1008 18:02:27.978004  289308 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:02:27.992605  289308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:28.165417  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 18:02:28.165506  289308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 18:02:28.304573  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 18:02:28.475505  289308 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246349" context rescaled to 1 replicas
	I1008 18:02:28.482585  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 18:02:28.482653  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 18:02:28.908153  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 18:02:28.908174  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 18:02:29.358635  289308 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 18:02:29.358710  289308 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 18:02:29.661783  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 18:02:30.003579  289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
	I1008 18:02:32.020344  289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
	I1008 18:02:32.740229  289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 18:02:32.740315  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:32.767059  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:33.065695  289308 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 18:02:33.242637  289308 addons.go:234] Setting addon gcp-auth=true in "addons-246349"
	I1008 18:02:33.242742  289308 host.go:66] Checking if "addons-246349" exists ...
	I1008 18:02:33.243312  289308 cli_runner.go:164] Run: docker container inspect addons-246349 --format={{.State.Status}}
	I1008 18:02:33.267488  289308 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 18:02:33.267541  289308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246349
	I1008 18:02:33.305427  289308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/addons-246349/id_rsa Username:docker}
	I1008 18:02:34.522724  289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
	I1008 18:02:36.069567  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.449939286s)
	I1008 18:02:36.069617  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.405214271s)
	I1008 18:02:36.069857  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.340583538s)
	I1008 18:02:36.069894  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.339181697s)
	I1008 18:02:36.069946  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.291212479s)
	I1008 18:02:36.070069  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.205738359s)
	I1008 18:02:36.070162  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.077605735s)
	I1008 18:02:36.070176  289308 addons.go:475] Verifying addon ingress=true in "addons-246349"
	I1008 18:02:36.070308  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.49933368s)
	I1008 18:02:36.070357  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.930131321s)
	I1008 18:02:36.070371  289308 addons.go:475] Verifying addon registry=true in "addons-246349"
	I1008 18:02:36.070707  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.824777843s)
	I1008 18:02:36.071133  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.632116065s)
	I1008 18:02:36.071162  289308 addons.go:475] Verifying addon metrics-server=true in "addons-246349"
	I1008 18:02:36.071261  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.348626898s)
	W1008 18:02:36.071291  289308 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 18:02:36.071307  289308 retry.go:31] will retry after 127.618052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
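	Note: the apply failure above is a CRD ordering race: the VolumeSnapshotClass manifest is applied in the same kubectl invocation as the CRDs that define it, so the REST mapping for kind VolumeSnapshotClass may not exist yet. minikube handles this by retrying (and, as seen further below, re-applying with --force). When reproducing by hand, one illustrative way to avoid the race is to wait for the CRD to be established before applying the class (file names taken from the log above):
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml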
	I1008 18:02:36.071384  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.766729088s)
	I1008 18:02:36.071562  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.409702163s)
	I1008 18:02:36.071576  289308 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-246349"
	I1008 18:02:36.071732  289308 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.804222853s)
	I1008 18:02:36.072926  289308 out.go:177] * Verifying ingress addon...
	I1008 18:02:36.074270  289308 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-246349 service yakd-dashboard -n yakd-dashboard
	
	I1008 18:02:36.074301  289308 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 18:02:36.074314  289308 out.go:177] * Verifying csi-hostpath-driver addon...
	I1008 18:02:36.074335  289308 out.go:177] * Verifying registry addon...
	I1008 18:02:36.076225  289308 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 18:02:36.078526  289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 18:02:36.079538  289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 18:02:36.080952  289308 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1008 18:02:36.081979  289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 18:02:36.082002  289308 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 18:02:36.111483  289308 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 18:02:36.111512  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:36.112913  289308 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 18:02:36.112940  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:36.113878  289308 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 18:02:36.113903  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 18:02:36.162527  289308 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
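	Note: the warning above is an optimistic-concurrency conflict rather than a hard failure: the storage-provisioner-rancher callback tried to clear the default-class annotation on csi-hostpath-sc while another addon was updating the same StorageClass, so the write lost on resourceVersion. If the default class ever needed fixing by hand, the standard annotation can be patched directly (illustrative commands; both class names are taken from the message above):
	kubectl patch storageclass csi-hostpath-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'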
	I1008 18:02:36.168723  289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 18:02:36.168750  289308 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 18:02:36.199481  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 18:02:36.266587  289308 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 18:02:36.266609  289308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 18:02:36.326074  289308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 18:02:36.585892  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:36.586808  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:36.587941  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:37.000140  289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
	I1008 18:02:37.083229  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:37.084935  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:37.086565  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:37.593620  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:37.595023  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:37.597137  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:37.867675  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.668147516s)
	I1008 18:02:37.867762  289308 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.541666501s)
	I1008 18:02:37.870478  289308 addons.go:475] Verifying addon gcp-auth=true in "addons-246349"
	I1008 18:02:37.874839  289308 out.go:177] * Verifying gcp-auth addon...
	I1008 18:02:37.876958  289308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 18:02:37.880644  289308 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 18:02:38.081951  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:38.086394  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:38.088093  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:38.588109  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:38.590226  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:38.591571  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:39.001299  289308 pod_ready.go:103] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"False"
	I1008 18:02:39.084075  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:39.086659  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:39.088444  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:39.500588  289308 pod_ready.go:93] pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:39.500657  289308 pod_ready.go:82] duration metric: took 11.507975652s for pod "coredns-7c65d6cfc9-vxnx7" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.500687  289308 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.503827  289308 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-wn7rk" not found
	I1008 18:02:39.503896  289308 pod_ready.go:82] duration metric: took 3.186282ms for pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace to be "Ready" ...
	E1008 18:02:39.503923  289308 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-wn7rk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-wn7rk" not found
	I1008 18:02:39.503951  289308 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.510760  289308 pod_ready.go:93] pod "etcd-addons-246349" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:39.510852  289308 pod_ready.go:82] duration metric: took 6.875209ms for pod "etcd-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.510885  289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.518175  289308 pod_ready.go:93] pod "kube-apiserver-addons-246349" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:39.518255  289308 pod_ready.go:82] duration metric: took 7.341828ms for pod "kube-apiserver-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.518286  289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.525756  289308 pod_ready.go:93] pod "kube-controller-manager-addons-246349" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:39.525832  289308 pod_ready.go:82] duration metric: took 7.523404ms for pod "kube-controller-manager-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.525859  289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjcqn" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.587923  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:39.589640  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:39.591311  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:39.696994  289308 pod_ready.go:93] pod "kube-proxy-pjcqn" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:39.697072  289308 pod_ready.go:82] duration metric: took 171.190843ms for pod "kube-proxy-pjcqn" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:39.697100  289308 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:40.087101  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:40.089932  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:40.092050  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:40.097343  289308 pod_ready.go:93] pod "kube-scheduler-addons-246349" in "kube-system" namespace has status "Ready":"True"
	I1008 18:02:40.097420  289308 pod_ready.go:82] duration metric: took 400.297504ms for pod "kube-scheduler-addons-246349" in "kube-system" namespace to be "Ready" ...
	I1008 18:02:40.097452  289308 pod_ready.go:39] duration metric: took 12.119420424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:02:40.097482  289308 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:02:40.097565  289308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:02:40.117964  289308 api_server.go:72] duration metric: took 14.875001195s to wait for apiserver process to appear ...
	I1008 18:02:40.118000  289308 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:02:40.118041  289308 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1008 18:02:40.127775  289308 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
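	Note: the healthz probe in the preceding lines is a plain HTTPS GET against the apiserver endpoint shown in the log; assuming anonymous access to /healthz is enabled (the Kubernetes default), the same check can be run by hand:
	curl -k https://192.168.49.2:8443/healthz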
	I1008 18:02:40.129055  289308 api_server.go:141] control plane version: v1.31.1
	I1008 18:02:40.129083  289308 api_server.go:131] duration metric: took 11.07557ms to wait for apiserver health ...
	I1008 18:02:40.129092  289308 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:02:40.305146  289308 system_pods.go:59] 18 kube-system pods found
	I1008 18:02:40.305232  289308 system_pods.go:61] "coredns-7c65d6cfc9-vxnx7" [c1e07fdc-33dc-435e-8e40-b069244eacdf] Running
	I1008 18:02:40.305258  289308 system_pods.go:61] "csi-hostpath-attacher-0" [c25c864d-62e5-4fb6-a29a-66844e47450e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 18:02:40.305288  289308 system_pods.go:61] "csi-hostpath-resizer-0" [6e8d3e30-cd5e-4a0e-942f-b1de57d6c2f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 18:02:40.305416  289308 system_pods.go:61] "csi-hostpathplugin-l5bvz" [18c1aa06-c0d9-4d44-883f-dae66d7ce26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 18:02:40.305439  289308 system_pods.go:61] "etcd-addons-246349" [0e2aebbb-383f-4438-8999-b8a36478fbca] Running
	I1008 18:02:40.305458  289308 system_pods.go:61] "kindnet-xj6p9" [4aa3675d-fdc3-4086-b0a6-acb881b72a93] Running
	I1008 18:02:40.305480  289308 system_pods.go:61] "kube-apiserver-addons-246349" [d9447c16-3440-4a87-b58c-3bbadb85362b] Running
	I1008 18:02:40.305514  289308 system_pods.go:61] "kube-controller-manager-addons-246349" [0612d7b3-5fc9-41b8-9e67-9dd8d7fb4035] Running
	I1008 18:02:40.305540  289308 system_pods.go:61] "kube-ingress-dns-minikube" [9252a2f3-dbf3-4e58-a28e-ea4af078c472] Running
	I1008 18:02:40.305561  289308 system_pods.go:61] "kube-proxy-pjcqn" [58e34b16-87f2-4137-9806-e0bb53cda95f] Running
	I1008 18:02:40.305586  289308 system_pods.go:61] "kube-scheduler-addons-246349" [10bdb84b-19ac-45d9-8387-85d41da96479] Running
	I1008 18:02:40.305611  289308 system_pods.go:61] "metrics-server-84c5f94fbc-4g8nz" [b47dc422-e583-458b-a57a-f97fb1c1ea0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 18:02:40.305634  289308 system_pods.go:61] "nvidia-device-plugin-daemonset-5d4vx" [dafed154-2336-4889-8370-c2b31d4fc071] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 18:02:40.305660  289308 system_pods.go:61] "registry-66c9cd494c-8tr5n" [0ecafdb8-54b7-4fd2-a93c-946dbacc3308] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 18:02:40.305892  289308 system_pods.go:61] "registry-proxy-827n9" [5050ac4c-9bae-47a6-9b15-3fd5cae17f26] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 18:02:40.305928  289308 system_pods.go:61] "snapshot-controller-56fcc65765-8d9jf" [fcb13f15-bbd9-4771-ab2f-c874fe39749b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 18:02:40.305952  289308 system_pods.go:61] "snapshot-controller-56fcc65765-mrrwj" [7c68d0d9-5902-4818-87ed-4a154c4cfafd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 18:02:40.305974  289308 system_pods.go:61] "storage-provisioner" [3a74ef82-6a22-47bf-bfad-22f738d724d6] Running
	I1008 18:02:40.305999  289308 system_pods.go:74] duration metric: took 176.899138ms to wait for pod list to return data ...
	I1008 18:02:40.306022  289308 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:02:40.496472  289308 default_sa.go:45] found service account: "default"
	I1008 18:02:40.496548  289308 default_sa.go:55] duration metric: took 190.503204ms for default service account to be created ...
	I1008 18:02:40.496573  289308 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:02:40.586247  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:40.586927  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:40.587840  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:40.703414  289308 system_pods.go:86] 18 kube-system pods found
	I1008 18:02:40.703518  289308 system_pods.go:89] "coredns-7c65d6cfc9-vxnx7" [c1e07fdc-33dc-435e-8e40-b069244eacdf] Running
	I1008 18:02:40.703545  289308 system_pods.go:89] "csi-hostpath-attacher-0" [c25c864d-62e5-4fb6-a29a-66844e47450e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 18:02:40.703589  289308 system_pods.go:89] "csi-hostpath-resizer-0" [6e8d3e30-cd5e-4a0e-942f-b1de57d6c2f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 18:02:40.703624  289308 system_pods.go:89] "csi-hostpathplugin-l5bvz" [18c1aa06-c0d9-4d44-883f-dae66d7ce26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 18:02:40.703648  289308 system_pods.go:89] "etcd-addons-246349" [0e2aebbb-383f-4438-8999-b8a36478fbca] Running
	I1008 18:02:40.703676  289308 system_pods.go:89] "kindnet-xj6p9" [4aa3675d-fdc3-4086-b0a6-acb881b72a93] Running
	I1008 18:02:40.703710  289308 system_pods.go:89] "kube-apiserver-addons-246349" [d9447c16-3440-4a87-b58c-3bbadb85362b] Running
	I1008 18:02:40.703743  289308 system_pods.go:89] "kube-controller-manager-addons-246349" [0612d7b3-5fc9-41b8-9e67-9dd8d7fb4035] Running
	I1008 18:02:40.703766  289308 system_pods.go:89] "kube-ingress-dns-minikube" [9252a2f3-dbf3-4e58-a28e-ea4af078c472] Running
	I1008 18:02:40.703791  289308 system_pods.go:89] "kube-proxy-pjcqn" [58e34b16-87f2-4137-9806-e0bb53cda95f] Running
	I1008 18:02:40.703824  289308 system_pods.go:89] "kube-scheduler-addons-246349" [10bdb84b-19ac-45d9-8387-85d41da96479] Running
	I1008 18:02:40.703859  289308 system_pods.go:89] "metrics-server-84c5f94fbc-4g8nz" [b47dc422-e583-458b-a57a-f97fb1c1ea0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 18:02:40.703884  289308 system_pods.go:89] "nvidia-device-plugin-daemonset-5d4vx" [dafed154-2336-4889-8370-c2b31d4fc071] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 18:02:40.703911  289308 system_pods.go:89] "registry-66c9cd494c-8tr5n" [0ecafdb8-54b7-4fd2-a93c-946dbacc3308] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 18:02:40.703945  289308 system_pods.go:89] "registry-proxy-827n9" [5050ac4c-9bae-47a6-9b15-3fd5cae17f26] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 18:02:40.703975  289308 system_pods.go:89] "snapshot-controller-56fcc65765-8d9jf" [fcb13f15-bbd9-4771-ab2f-c874fe39749b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 18:02:40.704001  289308 system_pods.go:89] "snapshot-controller-56fcc65765-mrrwj" [7c68d0d9-5902-4818-87ed-4a154c4cfafd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 18:02:40.704023  289308 system_pods.go:89] "storage-provisioner" [3a74ef82-6a22-47bf-bfad-22f738d724d6] Running
	I1008 18:02:40.704062  289308 system_pods.go:126] duration metric: took 207.459055ms to wait for k8s-apps to be running ...
	I1008 18:02:40.704093  289308 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:02:40.704186  289308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:02:40.717770  289308 system_svc.go:56] duration metric: took 13.652449ms WaitForService to wait for kubelet
	I1008 18:02:40.717800  289308 kubeadm.go:582] duration metric: took 15.474842467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:02:40.717819  289308 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:02:40.896968  289308 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 18:02:40.897006  289308 node_conditions.go:123] node cpu capacity is 2
	I1008 18:02:40.897025  289308 node_conditions.go:105] duration metric: took 179.200113ms to run NodePressure ...
	I1008 18:02:40.897039  289308 start.go:241] waiting for startup goroutines ...
	I1008 18:02:41.086098  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:41.086983  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:41.088737  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:41.583876  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:41.587568  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:41.589589  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:42.086674  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:42.088137  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:42.090774  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:42.583338  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:42.585033  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:42.586552  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:43.088634  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:43.089584  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:43.090860  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:43.585086  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:43.591513  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:43.594234  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:44.081112  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:44.083448  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:44.086259  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:44.587682  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:44.589560  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:44.590586  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:45.089657  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:45.091319  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:45.093742  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:45.582474  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:45.587684  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:45.589764  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:46.082765  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:46.085569  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:46.086015  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:46.584126  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:46.587118  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:46.590316  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:47.087844  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:47.089398  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:47.090605  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:47.584014  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:47.586544  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:47.589058  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:48.081460  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:48.085597  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:48.086486  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:48.581153  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:48.584262  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:48.585004  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:49.084897  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:49.086141  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:49.087470  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:49.582040  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:49.584691  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:49.584803  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:50.085792  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:50.086748  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:50.088154  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:50.584782  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:50.586106  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:50.586236  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:51.086729  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:51.087943  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:51.089607  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:51.585181  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:51.586159  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:51.587729  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:52.082132  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:52.084202  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:52.086512  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:52.581390  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:52.584675  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:52.585911  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:53.082446  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:53.083857  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:53.085237  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:53.584420  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:53.584838  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:53.586916  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:54.083620  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:54.085078  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:54.087183  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:54.594447  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:54.595108  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:54.596038  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:55.085467  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:55.086654  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:55.089498  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:55.585742  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:55.586819  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:55.587477  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:56.086428  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:56.088050  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:56.089908  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:56.581836  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:56.584600  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:56.586187  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:57.083751  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:57.084468  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:57.085833  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:57.586579  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:57.588398  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:57.589854  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:58.087371  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:58.089275  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:58.091229  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:58.590987  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:58.591381  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:58.592797  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:59.088635  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:59.090307  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:02:59.091475  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:59.583695  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:02:59.584786  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:02:59.585941  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:00.099280  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:00.101022  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:00.103069  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:00.584481  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:00.585579  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:00.586234  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:01.081631  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:01.084376  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:01.085616  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:01.585241  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:01.587169  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:01.588429  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:02.081507  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:02.084786  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:02.085571  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:02.585313  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:02.586913  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:02.592334  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:03.083196  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:03.085610  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 18:03:03.087922  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:03.589129  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:03.590682  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:03.594750  289308 kapi.go:107] duration metric: took 27.515207041s to wait for kubernetes.io/minikube-addons=registry ...
	I1008 18:03:04.082354  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:04.084679  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:04.584987  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:04.586669  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:05.081951  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:05.086231  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:05.587648  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:05.589407  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:06.083756  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:06.084291  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:06.582580  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:06.585250  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:07.081421  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:07.085211  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:07.587840  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:07.590030  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:08.081895  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:08.084144  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:08.584382  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:08.585170  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:09.082120  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:09.084749  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:09.587013  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:09.589037  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:10.084794  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:10.086301  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:10.583420  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:10.585352  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:11.080829  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:11.084030  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:11.582527  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:11.585402  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:12.082179  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:12.084411  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:12.582480  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:12.585135  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:13.081783  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:13.084231  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:13.582626  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:13.583837  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:14.081528  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:14.084425  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:14.582387  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:14.585573  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:15.083441  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:15.085141  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:15.584379  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:15.585165  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:16.083509  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:16.085260  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:16.582205  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:16.585651  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:17.082167  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:17.083858  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:17.619448  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:17.620816  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:18.086518  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:18.086700  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:18.581215  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:18.584257  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:19.083440  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:19.084817  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:19.588262  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:19.589248  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:20.081502  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:20.085030  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:20.585384  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:20.587776  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:21.082162  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:21.083906  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:21.584043  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:21.584470  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:22.081812  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:22.085233  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:22.582013  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:22.583737  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:23.083187  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:23.084508  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:23.581423  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:23.584227  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:24.083644  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:24.085280  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 18:03:24.589794  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:24.591420  289308 kapi.go:107] duration metric: took 48.512891684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 18:03:25.081278  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:25.581852  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:26.081333  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:26.580855  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:27.081823  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:27.582176  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:28.080609  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:28.581290  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:29.080843  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:29.581540  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:30.083549  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:30.581944  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:31.080868  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:31.580480  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:32.080553  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:32.581027  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:33.080741  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:33.581496  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:34.081322  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:34.581249  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:35.080901  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:35.581517  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:36.081235  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:36.581100  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:37.081601  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:37.586182  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:38.081729  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:38.581256  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:39.081756  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:39.581091  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:40.082622  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:40.587040  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:41.082359  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:41.582475  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:42.082043  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:42.580834  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:43.081803  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:43.581574  289308 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 18:03:44.084053  289308 kapi.go:107] duration metric: took 1m8.007809843s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 18:04:00.882190  289308 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 18:04:00.882218  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:01.380749  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:01.880825  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:02.381271  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:02.880475  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:03.380867  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:03.881280  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:04.382155  289308 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 18:04:04.881498  289308 kapi.go:107] duration metric: took 1m27.004537794s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 18:04:04.883187  289308 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-246349 cluster.
	I1008 18:04:04.888309  289308 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 18:04:04.889847  289308 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1008 18:04:04.891678  289308 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1008 18:04:04.893279  289308 addons.go:510] duration metric: took 1m39.649951132s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1008 18:04:04.893324  289308 start.go:246] waiting for cluster config update ...
	I1008 18:04:04.893347  289308 start.go:255] writing updated cluster config ...
	I1008 18:04:04.893647  289308 ssh_runner.go:195] Run: rm -f paused
	I1008 18:04:05.282068  289308 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:04:05.284561  289308 out.go:177] * Done! kubectl is now configured to use "addons-246349" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	980521b588716       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   8c7c7b3f98bc1       gcp-auth-89d5ffd79-bwm7k
	8749228d9fe10       289a818c8d9c5       3 minutes ago       Running             controller                               0                   1bc344558ca0e       ingress-nginx-controller-bc57996ff-rx4wd
	bcf4603039a16       ee6d597e62dc8       3 minutes ago       Running             csi-snapshotter                          0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	6608913f541df       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	69702d1e60181       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	604a87afc74d9       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	683b562990719       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	41870c1927975       1a9605c872c1d       4 minutes ago       Running             admission                                0                   874fcd0594ac7       volcano-admission-5874dfdd79-hpn22
	d1a47fea008ea       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   95777b431ae0e       volcano-scheduler-6c9778cbdf-65r4d
	e2cecd581940c       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   2cc3adcd6fd41       csi-hostpath-attacher-0
	6428cade97dde       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   495a3424e411f       csi-hostpath-resizer-0
	5b5bc1dd22a92       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   a95fcb2cd959c       csi-hostpathplugin-l5bvz
	29801a6f7ef4a       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   f365ab3fde634       volcano-controllers-789ffc5785-qbzrm
	91d0ecd4b69c5       420193b27261a       4 minutes ago       Exited              patch                                    0                   d8a2222c1ce83       ingress-nginx-admission-patch-nm6hq
	a181e62db6600       420193b27261a       4 minutes ago       Exited              create                                   0                   dce95640fb69c       ingress-nginx-admission-create-f8ktk
	dcae22117cba6       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   81cc3e0d2676a       local-path-provisioner-86d989889c-wkjcc
	1df1c52b7b5da       f7ed138f698f6       4 minutes ago       Running             registry-proxy                           0                   fc1a802e25193       registry-proxy-827n9
	7aa2fb39c4b18       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   515496cb3ec2e       snapshot-controller-56fcc65765-mrrwj
	ed7a2c782a48e       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   e6f554b38e9f9       snapshot-controller-56fcc65765-8d9jf
	b055f8b51c26d       5548a49bb60ba       4 minutes ago       Running             metrics-server                           0                   774f7f31fb739       metrics-server-84c5f94fbc-4g8nz
	b80ed2be5cfc8       77bdba588b953       4 minutes ago       Running             yakd                                     0                   2fdad532661ad       yakd-dashboard-67d98fc6b-8ztv6
	76ab47c2184b6       c9cf76bb104e1       4 minutes ago       Running             registry                                 0                   131a74c775757       registry-66c9cd494c-8tr5n
	2c6544a6f9b23       be9cac3585579       4 minutes ago       Running             cloud-spanner-emulator                   0                   d8d0d21134de7       cloud-spanner-emulator-5b584cc74-b4d47
	13ac29a0e4d85       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   3b4770a1bce87       nvidia-device-plugin-daemonset-5d4vx
	04ec1433e8816       68de1ddeaded8       4 minutes ago       Running             gadget                                   0                   c1378e829b327       gadget-nff5l
	97c83e1876804       35508c2f890c4       4 minutes ago       Running             minikube-ingress-dns                     0                   f6e5f4a604687       kube-ingress-dns-minikube
	6c2b94ff7a984       2f6c962e7b831       4 minutes ago       Running             coredns                                  0                   791dfc7f03e80       coredns-7c65d6cfc9-vxnx7
	7691651a94691       ba04bb24b9575       4 minutes ago       Running             storage-provisioner                      0                   f70e654436b88       storage-provisioner
	a0ee92e9cb26f       6a23fa8fd2b78       4 minutes ago       Running             kindnet-cni                              0                   391ff04a49caf       kindnet-xj6p9
	05eb020f1ea0a       24a140c548c07       4 minutes ago       Running             kube-proxy                               0                   b69a32ea58af3       kube-proxy-pjcqn
	7bf20a531418a       7f8aa378bb47d       5 minutes ago       Running             kube-scheduler                           0                   3246de140011d       kube-scheduler-addons-246349
	51513c8a85f77       27e3830e14027       5 minutes ago       Running             etcd                                     0                   57857b77c4fab       etcd-addons-246349
	931e105ba9202       d3f53a98c0a9d       5 minutes ago       Running             kube-apiserver                           0                   34aea09aab90c       kube-apiserver-addons-246349
	84783b5587d4d       279f381cb3736       5 minutes ago       Running             kube-controller-manager                  0                   9a9652d2200d9       kube-controller-manager-addons-246349
	
	
	==> containerd <==
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088107393Z" level=info msg="TearDown network for sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088147489Z" level=info msg="StopPodSandbox for \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" returns successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088842242Z" level=info msg="RemovePodSandbox for \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\""
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.088889739Z" level=info msg="Forcibly stopping sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\""
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.102918196Z" level=info msg="TearDown network for sandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.109304413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.109460047Z" level=info msg="RemovePodSandbox \"b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f\" returns successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.110662848Z" level=info msg="StopPodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.118847744Z" level=info msg="TearDown network for sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.118888931Z" level=info msg="StopPodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" returns successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.119563485Z" level=info msg="RemovePodSandbox for \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.119598986Z" level=info msg="Forcibly stopping sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\""
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.127420252Z" level=info msg="TearDown network for sandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" successfully"
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.134002289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 08 18:04:20 addons-246349 containerd[817]: time="2024-10-08T18:04:20.134181873Z" level=info msg="RemovePodSandbox \"fea6f165bafcf5bf61e8630bc29f6222449f81f1a4047752d8ac31a3927d9c61\" returns successfully"
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.138713671Z" level=info msg="RemoveContainer for \"5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a\""
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.145524890Z" level=info msg="RemoveContainer for \"5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a\" returns successfully"
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.147512145Z" level=info msg="StopPodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.155200970Z" level=info msg="TearDown network for sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" successfully"
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.155383031Z" level=info msg="StopPodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" returns successfully"
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.156105617Z" level=info msg="RemovePodSandbox for \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.156147748Z" level=info msg="Forcibly stopping sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\""
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.164966291Z" level=info msg="TearDown network for sandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" successfully"
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.172749626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 08 18:05:20 addons-246349 containerd[817]: time="2024-10-08T18:05:20.172906712Z" level=info msg="RemovePodSandbox \"82faddd4be9818c16208c8eb816bc8094e3498867a076b9a8b25ad806c17046b\" returns successfully"
	
	
	==> coredns [6c2b94ff7a984f8d04d8b498ee95608c149d7140ff36d16f624705cc2eb30d11] <==
	[INFO] 10.244.0.10:50276 - 33321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076722s
	[INFO] 10.244.0.10:50276 - 18308 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001768845s
	[INFO] 10.244.0.10:50276 - 65302 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002485845s
	[INFO] 10.244.0.10:50276 - 27937 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000187376s
	[INFO] 10.244.0.10:50276 - 56698 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000070567s
	[INFO] 10.244.0.10:40412 - 31062 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102032s
	[INFO] 10.244.0.10:40412 - 30832 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036388s
	[INFO] 10.244.0.10:48233 - 25595 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086764s
	[INFO] 10.244.0.10:48233 - 25150 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032392s
	[INFO] 10.244.0.10:35157 - 23956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048834s
	[INFO] 10.244.0.10:35157 - 23767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056177s
	[INFO] 10.244.0.10:50403 - 11654 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00139246s
	[INFO] 10.244.0.10:50403 - 11843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002251989s
	[INFO] 10.244.0.10:35526 - 48709 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077393s
	[INFO] 10.244.0.10:35526 - 49129 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050934s
	[INFO] 10.244.0.24:35068 - 45868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195114s
	[INFO] 10.244.0.24:47573 - 40190 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000081824s
	[INFO] 10.244.0.24:36383 - 47688 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083424s
	[INFO] 10.244.0.24:40112 - 20652 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117818s
	[INFO] 10.244.0.24:41941 - 5454 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000203089s
	[INFO] 10.244.0.24:48138 - 10559 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117236s
	[INFO] 10.244.0.24:59564 - 6573 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002234458s
	[INFO] 10.244.0.24:50982 - 5978 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002146751s
	[INFO] 10.244.0.24:43387 - 60009 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001591699s
	[INFO] 10.244.0.24:44938 - 62531 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001659117s
	
	
	==> describe nodes <==
	Name:               addons-246349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-246349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=addons-246349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_02_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-246349
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-246349"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:02:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-246349
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:07:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:04:22 +0000   Tue, 08 Oct 2024 18:02:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:04:22 +0000   Tue, 08 Oct 2024 18:02:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:04:22 +0000   Tue, 08 Oct 2024 18:02:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:04:22 +0000   Tue, 08 Oct 2024 18:02:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-246349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbf405d2c0144d0694aae0a7fa67238d
	  System UUID:                720ab22a-5498-4c8a-9cc4-cacf12496aa0
	  Boot ID:                    b951cf46-640a-45c2-9395-0fcf341c803c
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-b4d47      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gadget                      gadget-nff5l                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-89d5ffd79-bwm7k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rx4wd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m51s
	  kube-system                 coredns-7c65d6cfc9-vxnx7                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m59s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-l5bvz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 etcd-addons-246349                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-xj6p9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m59s
	  kube-system                 kube-apiserver-addons-246349                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-246349       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-pjcqn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-addons-246349                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-84c5f94fbc-4g8nz             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m53s
	  kube-system                 nvidia-device-plugin-daemonset-5d4vx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-66c9cd494c-8tr5n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-proxy-827n9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-56fcc65765-8d9jf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 snapshot-controller-56fcc65765-mrrwj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  local-path-storage          local-path-provisioner-86d989889c-wkjcc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  volcano-system              volcano-admission-5874dfdd79-hpn22          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  volcano-system              volcano-controllers-789ffc5785-qbzrm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  volcano-system              volcano-scheduler-6c9778cbdf-65r4d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-8ztv6              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m57s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 5m11s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node addons-246349 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m11s (x7 over 5m11s)  kubelet          Node addons-246349 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node addons-246349 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal   Starting                 5m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m4s                   kubelet          Node addons-246349 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m4s                   kubelet          Node addons-246349 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m4s                   kubelet          Node addons-246349 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m                     node-controller  Node addons-246349 event: Registered Node addons-246349 in Controller
	
	
	==> dmesg <==
	[Oct 8 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015697] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471811] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.053322] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014987] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.650369] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.406873] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 8 16:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 8 17:31] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [51513c8a85f774b4c758f44d804304074d74a4df6b212641ab98897b1cc8d08c] <==
	{"level":"info","ts":"2024-10-08T18:02:14.131075Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-08T18:02:14.122622Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-08T18:02:14.131178Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-08T18:02:14.130968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-10-08T18:02:14.131369Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-10-08T18:02:14.747438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-08T18:02:14.747667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-08T18:02:14.747793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-08T18:02:14.747907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-08T18:02:14.747987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-08T18:02:14.748117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-08T18:02:14.748192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-08T18:02:14.751615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:02:14.753072Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-246349 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T18:02:14.753336Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:02:14.753805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:02:14.754129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T18:02:14.754224Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T18:02:14.754929Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:02:14.762679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T18:02:14.757607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:02:14.757649Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:02:14.789900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:02:14.790035Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:02:14.794162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [980521b58871663597d7aa280874de03bd381fd57a5c0c9411d8b78c1425c3a3] <==
	2024/10/08 18:04:03 GCP Auth Webhook started!
	2024/10/08 18:04:21 Ready to marshal response ...
	2024/10/08 18:04:21 Ready to write response ...
	2024/10/08 18:04:22 Ready to marshal response ...
	2024/10/08 18:04:22 Ready to write response ...
	
	
	==> kernel <==
	 18:07:24 up  1:49,  0 users,  load average: 0.56, 1.32, 1.94
	Linux addons-246349 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a0ee92e9cb26fec74d7686d78194d24e054266eef9cd829964d4e76a5ca41393] <==
	I1008 18:05:16.911067       1 main.go:299] handling current node
	I1008 18:05:26.910646       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:05:26.910678       1 main.go:299] handling current node
	I1008 18:05:36.916220       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:05:36.916257       1 main.go:299] handling current node
	I1008 18:05:46.914999       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:05:46.915034       1 main.go:299] handling current node
	I1008 18:05:56.915129       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:05:56.915163       1 main.go:299] handling current node
	I1008 18:06:06.913116       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:06.913154       1 main.go:299] handling current node
	I1008 18:06:16.913724       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:16.913761       1 main.go:299] handling current node
	I1008 18:06:26.911012       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:26.911046       1 main.go:299] handling current node
	I1008 18:06:36.913983       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:36.914019       1 main.go:299] handling current node
	I1008 18:06:46.919745       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:46.919786       1 main.go:299] handling current node
	I1008 18:06:56.910606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:06:56.910642       1 main.go:299] handling current node
	I1008 18:07:06.916607       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:07:06.916640       1 main.go:299] handling current node
	I1008 18:07:16.917744       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1008 18:07:16.917776       1 main.go:299] handling current node
	
	
	==> kube-apiserver [931e105ba9202a3c6933ff2e79e14d2fc2b27a2c0ad75e0a0e1f4b5fde19be28] <==
	I1008 18:03:00.569493       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1008 18:03:08.549855       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:03:08.549896       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	W1008 18:03:08.551699       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:08.624681       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:03:08.624721       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	W1008 18:03:08.628509       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:16.783295       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:17.829990       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:18.880333       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:19.629353       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:03:19.629393       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	W1008 18:03:19.631281       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:19.931644       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:20.952917       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:22.048375       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:23.123392       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.45.56:443: connect: connection refused
	W1008 18:03:40.562335       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:03:40.562370       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	W1008 18:03:40.640146       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:03:40.640184       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	W1008 18:04:00.595819       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.133.74:443: connect: connection refused
	E1008 18:04:00.595861       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.133.74:443: connect: connection refused" logger="UnhandledError"
	I1008 18:04:21.839096       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1008 18:04:21.901946       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [84783b5587d4d50f5c1f80f9531cce51c0d0b992e6f9a5d70c1c3c4530f38fa9] <==
	I1008 18:03:42.677965       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1008 18:03:42.937960       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1008 18:03:43.681708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="80.84µs"
	I1008 18:03:43.896889       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1008 18:03:43.944904       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1008 18:03:43.958416       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1008 18:03:43.965487       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1008 18:03:44.903264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1008 18:03:44.911301       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1008 18:03:44.917601       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1008 18:03:51.763846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246349"
	I1008 18:03:57.926637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="13.592096ms"
	I1008 18:03:57.928564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="77.362µs"
	I1008 18:04:00.613316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="19.951748ms"
	I1008 18:04:00.644837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="31.469199ms"
	I1008 18:04:00.645073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="188.493µs"
	I1008 18:04:00.657559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="65.932µs"
	I1008 18:04:04.754020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.726429ms"
	I1008 18:04:04.754861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="30.726µs"
	I1008 18:04:13.018116       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1008 18:04:13.056585       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1008 18:04:14.009566       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1008 18:04:14.038807       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1008 18:04:21.537561       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I1008 18:04:22.149737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246349"
	
	
	==> kube-proxy [05eb020f1ea0a837a01cd0b3c03976b8f6e26076bd75e79a502cb8361daa06c8] <==
	I1008 18:02:26.551511       1 server_linux.go:66] "Using iptables proxy"
	I1008 18:02:26.657733       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1008 18:02:26.657819       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 18:02:26.700843       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 18:02:26.700911       1 server_linux.go:169] "Using iptables Proxier"
	I1008 18:02:26.705351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 18:02:26.705930       1 server.go:483] "Version info" version="v1.31.1"
	I1008 18:02:26.705947       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:02:26.707886       1 config.go:199] "Starting service config controller"
	I1008 18:02:26.707908       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 18:02:26.707926       1 config.go:105] "Starting endpoint slice config controller"
	I1008 18:02:26.707930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 18:02:26.708325       1 config.go:328] "Starting node config controller"
	I1008 18:02:26.708332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 18:02:26.808500       1 shared_informer.go:320] Caches are synced for node config
	I1008 18:02:26.808511       1 shared_informer.go:320] Caches are synced for service config
	I1008 18:02:26.808535       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7bf20a531418a403928d11a062dc813cc7f3428d4c13cb3bc97ffb2cbfb60f72] <==
	W1008 18:02:17.691850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 18:02:17.692293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.520247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 18:02:18.520303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.521368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 18:02:18.521580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.548474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 18:02:18.548736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.550177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 18:02:18.550210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.550457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 18:02:18.550520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.554626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 18:02:18.554665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.577449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1008 18:02:18.577723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.589789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:02:18.590048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.667706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 18:02:18.667941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.821734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 18:02:18.821959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:02:18.923728       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 18:02:18.923771       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1008 18:02:20.676165       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 18:03:44 addons-246349 kubelet[1500]: I1008 18:03:44.666809    1500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7992de2266a84aabb6ae18ef8e2d29a1cdc10237a7800b9af536693cffd5a8f"
	Oct 08 18:03:54 addons-246349 kubelet[1500]: I1008 18:03:54.986749    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: E1008 18:04:00.622231    1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d01453aa-f430-491d-8ae4-e6894b954954" containerName="patch"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: E1008 18:04:00.622312    1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" containerName="create"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.622368    1500 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" containerName="create"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.622378    1500 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01453aa-f430-491d-8ae4-e6894b954954" containerName="patch"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689118    1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4f215c74-d0c2-4dac-84bf-d2ccdc093974-gcp-creds\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689179    1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f215c74-d0c2-4dac-84bf-d2ccdc093974-webhook-certs\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689209    1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9gw\" (UniqueName: \"kubernetes.io/projected/4f215c74-d0c2-4dac-84bf-d2ccdc093974-kube-api-access-bn9gw\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.689244    1500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-project\" (UniqueName: \"kubernetes.io/host-path/4f215c74-d0c2-4dac-84bf-d2ccdc093974-gcp-project\") pod \"gcp-auth-89d5ffd79-bwm7k\" (UID: \"4f215c74-d0c2-4dac-84bf-d2ccdc093974\") " pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k"
	Oct 08 18:04:00 addons-246349 kubelet[1500]: I1008 18:04:00.986627    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:04:13 addons-246349 kubelet[1500]: I1008 18:04:13.035289    1500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-bwm7k" podStartSLOduration=10.205053457 podStartE2EDuration="13.035268123s" podCreationTimestamp="2024-10-08 18:04:00 +0000 UTC" firstStartedPulling="2024-10-08 18:04:01.031769041 +0000 UTC m=+101.150799063" lastFinishedPulling="2024-10-08 18:04:03.861983707 +0000 UTC m=+103.981013729" observedRunningTime="2024-10-08 18:04:04.744821384 +0000 UTC m=+104.863851397" watchObservedRunningTime="2024-10-08 18:04:13.035268123 +0000 UTC m=+113.154298137"
	Oct 08 18:04:13 addons-246349 kubelet[1500]: I1008 18:04:13.990385    1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9137c7-0886-4c7b-9d0e-c3005aa0d173" path="/var/lib/kubelet/pods/2c9137c7-0886-4c7b-9d0e-c3005aa0d173/volumes"
	Oct 08 18:04:15 addons-246349 kubelet[1500]: I1008 18:04:15.990654    1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01453aa-f430-491d-8ae4-e6894b954954" path="/var/lib/kubelet/pods/d01453aa-f430-491d-8ae4-e6894b954954/volumes"
	Oct 08 18:04:17 addons-246349 kubelet[1500]: I1008 18:04:17.986521    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:04:20 addons-246349 kubelet[1500]: I1008 18:04:20.055017    1500 scope.go:117] "RemoveContainer" containerID="9a257d92e4c54f9a9da85f8489c88f371c8db9e8a4da114c886b42f4d58e2207"
	Oct 08 18:04:20 addons-246349 kubelet[1500]: I1008 18:04:20.062682    1500 scope.go:117] "RemoveContainer" containerID="6fdab89be6076baa06806fcb998104b225420d043ec4fbe4fc036f80d01168ac"
	Oct 08 18:04:21 addons-246349 kubelet[1500]: I1008 18:04:21.992538    1500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7826b1a6-cf6b-4426-a257-d67d2b32e54d" path="/var/lib/kubelet/pods/7826b1a6-cf6b-4426-a257-d67d2b32e54d/volumes"
	Oct 08 18:05:05 addons-246349 kubelet[1500]: I1008 18:05:05.987471    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:05:10 addons-246349 kubelet[1500]: I1008 18:05:10.987453    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:05:20 addons-246349 kubelet[1500]: I1008 18:05:20.137269    1500 scope.go:117] "RemoveContainer" containerID="5803c163e00cf0eee0bd350a8b9db4f15ec0256048cdf96f0db0def1f72dea5a"
	Oct 08 18:05:20 addons-246349 kubelet[1500]: I1008 18:05:20.987034    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:06:22 addons-246349 kubelet[1500]: I1008 18:06:22.986807    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-827n9" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:06:23 addons-246349 kubelet[1500]: I1008 18:06:23.986916    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-8tr5n" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 18:06:37 addons-246349 kubelet[1500]: I1008 18:06:37.987384    1500 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5d4vx" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [7691651a9469175aa252f47f0093581ef43db330ceb7c331793947033e722a48] <==
	I1008 18:02:31.070261       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 18:02:31.081441       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 18:02:31.081492       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 18:02:31.094192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 18:02:31.096785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f9f89ce-996e-4cef-a206-5313a963ed8e", APIVersion:"v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-246349_44e3839c-233d-420a-ab18-92900568c363 became leader
	I1008 18:02:31.096904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-246349_44e3839c-233d-420a-ab18-92900568c363!
	I1008 18:02:31.197436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-246349_44e3839c-233d-420a-ab18-92900568c363!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-246349 -n addons-246349
helpers_test.go:261: (dbg) Run:  kubectl --context addons-246349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0: exit status 1 (98.049064ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f8ktk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nm6hq" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-246349 describe pod ingress-nginx-admission-create-f8ktk ingress-nginx-admission-patch-nm6hq test-job-nginx-0: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable volcano --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable volcano --alsologtostderr -v=1: (11.301608714s)
--- FAIL: TestAddons/serial/Volcano (211.21s)
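
The node data captured above points to plain CPU exhaustion rather than a Volcano regression: addons-246349 advertises only 2 CPUs allocatable, the describe-nodes dump shows 1050m (52%) already requested by system and addon pods, and the test job was rejected with "0/1 nodes are unavailable: 1 Insufficient cpu.", which is the expected scheduler verdict once the job's CPU request exceeds the remaining ~950m. A quick way to confirm the headroom on this profile is sketched below; it assumes the addons-246349 context and the my-volcano job still exist at the time of inspection, and the grep window sizes are arbitrary:

	kubectl --context addons-246349 describe node addons-246349 | grep -A 10 "Allocated resources"
	kubectl --context addons-246349 get vcjob test-job -n my-volcano -o yaml | grep -A 3 "requests:"

Comparing the job's cpu request against 2000m minus the 1050m already allocated reproduces the scheduler's arithmetic.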

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (377.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-265388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-265388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.705610158s)

                                                
                                                
-- stdout --
	* [old-k8s-version-265388] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-265388" primary control-plane node in "old-k8s-version-265388" cluster
	* Pulling base image v0.0.45-1728382586-19774 ...
	* Restarting existing docker container for "old-k8s-version-265388" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-265388 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:50:06.247085  497176 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:50:06.247289  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:50:06.247321  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:50:06.247348  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:50:06.247632  497176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:50:06.248049  497176 out.go:352] Setting JSON to false
	I1008 18:50:06.249170  497176 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9155,"bootTime":1728404252,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:50:06.249283  497176 start.go:139] virtualization:  
	I1008 18:50:06.253051  497176 out.go:177] * [old-k8s-version-265388] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:50:06.255879  497176 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:50:06.255948  497176 notify.go:220] Checking for updates...
	I1008 18:50:06.262470  497176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:50:06.265204  497176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:50:06.267298  497176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:50:06.269891  497176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:50:06.272634  497176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:50:06.275723  497176 config.go:182] Loaded profile config "old-k8s-version-265388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1008 18:50:06.278883  497176 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 18:50:06.281539  497176 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:50:06.309224  497176 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:50:06.309363  497176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:50:06.363351  497176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-08 18:50:06.35292636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:50:06.363474  497176 docker.go:318] overlay module found
	I1008 18:50:06.366450  497176 out.go:177] * Using the docker driver based on existing profile
	I1008 18:50:06.369068  497176 start.go:297] selected driver: docker
	I1008 18:50:06.369085  497176 start.go:901] validating driver "docker" against &{Name:old-k8s-version-265388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-265388 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:50:06.369204  497176 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:50:06.369967  497176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:50:06.419382  497176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-08 18:50:06.408954588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:50:06.419806  497176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:50:06.419841  497176 cni.go:84] Creating CNI manager for ""
	I1008 18:50:06.419885  497176 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:50:06.419931  497176 start.go:340] cluster config:
	{Name:old-k8s-version-265388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-265388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:50:06.424415  497176 out.go:177] * Starting "old-k8s-version-265388" primary control-plane node in "old-k8s-version-265388" cluster
	I1008 18:50:06.426910  497176 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1008 18:50:06.429759  497176 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1008 18:50:06.432407  497176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1008 18:50:06.432480  497176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1008 18:50:06.432472  497176 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1008 18:50:06.432493  497176 cache.go:56] Caching tarball of preloaded images
	I1008 18:50:06.432573  497176 preload.go:172] Found /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 18:50:06.432590  497176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1008 18:50:06.432723  497176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/config.json ...
	I1008 18:50:06.450852  497176 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1008 18:50:06.450874  497176 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1008 18:50:06.450891  497176 cache.go:194] Successfully downloaded all kic artifacts
	I1008 18:50:06.450915  497176 start.go:360] acquireMachinesLock for old-k8s-version-265388: {Name:mk14d4e20967ced11383ac7a8a46c09beacc0f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:50:06.450998  497176 start.go:364] duration metric: took 62.242µs to acquireMachinesLock for "old-k8s-version-265388"
	I1008 18:50:06.451020  497176 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:50:06.451025  497176 fix.go:54] fixHost starting: 
	I1008 18:50:06.451291  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:06.467591  497176 fix.go:112] recreateIfNeeded on old-k8s-version-265388: state=Stopped err=<nil>
	W1008 18:50:06.467618  497176 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:50:06.470611  497176 out.go:177] * Restarting existing docker container for "old-k8s-version-265388" ...
	I1008 18:50:06.473139  497176 cli_runner.go:164] Run: docker start old-k8s-version-265388
	I1008 18:50:06.774831  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:06.796361  497176 kic.go:430] container "old-k8s-version-265388" state is running.
	I1008 18:50:06.796790  497176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-265388
	I1008 18:50:06.819878  497176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/config.json ...
	I1008 18:50:06.821226  497176 machine.go:93] provisionDockerMachine start ...
	I1008 18:50:06.821848  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:06.845253  497176 main.go:141] libmachine: Using SSH client type: native
	I1008 18:50:06.845572  497176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1008 18:50:06.845582  497176 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:50:06.846686  497176 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
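
The dial error above is expected right after the container restart: sshd inside the freshly started container is not listening yet, so minikube retries until the host-mapped port (127.0.0.1:33428 in this run) accepts connections, which happens roughly three seconds later. A minimal Go sketch of that wait-for-port pattern, using only the standard library; the address and timeout are taken from this log, and the helper name is illustrative rather than minikube's actual code:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls a TCP endpoint until it accepts a connection or the
// deadline passes. addr would be the host-mapped SSH port of the
// restarted container, e.g. "127.0.0.1:33428".
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // back off before retrying
	}
}

func main() {
	if err := waitForTCP("127.0.0.1:33428", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port is reachable")
}
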
	I1008 18:50:09.978108  497176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-265388
	
	I1008 18:50:09.978134  497176 ubuntu.go:169] provisioning hostname "old-k8s-version-265388"
	I1008 18:50:09.978198  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:09.996094  497176 main.go:141] libmachine: Using SSH client type: native
	I1008 18:50:09.996343  497176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1008 18:50:09.996362  497176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-265388 && echo "old-k8s-version-265388" | sudo tee /etc/hostname
	I1008 18:50:10.154476  497176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-265388
	
	I1008 18:50:10.154558  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:10.172007  497176 main.go:141] libmachine: Using SSH client type: native
	I1008 18:50:10.172322  497176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1008 18:50:10.172350  497176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-265388' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-265388/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-265388' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:50:10.301817  497176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:50:10.301848  497176 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19774-283126/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-283126/.minikube}
	I1008 18:50:10.301912  497176 ubuntu.go:177] setting up certificates
	I1008 18:50:10.301922  497176 provision.go:84] configureAuth start
	I1008 18:50:10.301991  497176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-265388
	I1008 18:50:10.319043  497176 provision.go:143] copyHostCerts
	I1008 18:50:10.319119  497176 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem, removing ...
	I1008 18:50:10.319140  497176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem
	I1008 18:50:10.319229  497176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem (1078 bytes)
	I1008 18:50:10.319332  497176 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem, removing ...
	I1008 18:50:10.319346  497176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem
	I1008 18:50:10.319374  497176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem (1123 bytes)
	I1008 18:50:10.319436  497176 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem, removing ...
	I1008 18:50:10.319444  497176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem
	I1008 18:50:10.319469  497176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem (1679 bytes)
	I1008 18:50:10.319530  497176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-265388 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-265388]
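
The server certificate generated at this step has to carry SANs for every name and address the node can be reached by (127.0.0.1, 192.168.76.2, localhost, minikube and the profile name, per the san=[...] list above). A rough, self-contained Go sketch of issuing such a SAN-bearing certificate from a locally created CA; the names and IPs come from the log, while the key size, validity period and elided error handling are illustrative assumptions, not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-265388"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-265388"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate in PEM form (stdout here; minikube writes server.pem).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
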
	I1008 18:50:10.626618  497176 provision.go:177] copyRemoteCerts
	I1008 18:50:10.626693  497176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:50:10.626740  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:10.646063  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:10.742738  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 18:50:10.768630  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 18:50:10.794521  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 18:50:10.819968  497176 provision.go:87] duration metric: took 518.031024ms to configureAuth
	I1008 18:50:10.820005  497176 ubuntu.go:193] setting minikube options for container-runtime
	I1008 18:50:10.820217  497176 config.go:182] Loaded profile config "old-k8s-version-265388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1008 18:50:10.820232  497176 machine.go:96] duration metric: took 3.998989264s to provisionDockerMachine
	I1008 18:50:10.820241  497176 start.go:293] postStartSetup for "old-k8s-version-265388" (driver="docker")
	I1008 18:50:10.820252  497176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:50:10.820319  497176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:50:10.820362  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:10.837194  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:10.931148  497176 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:50:10.934498  497176 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 18:50:10.934536  497176 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1008 18:50:10.934548  497176 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1008 18:50:10.934555  497176 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1008 18:50:10.934577  497176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/addons for local assets ...
	I1008 18:50:10.934636  497176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/files for local assets ...
	I1008 18:50:10.934722  497176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem -> 2885412.pem in /etc/ssl/certs
	I1008 18:50:10.934828  497176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:50:10.943794  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem --> /etc/ssl/certs/2885412.pem (1708 bytes)
	I1008 18:50:10.969186  497176 start.go:296] duration metric: took 148.927477ms for postStartSetup
	I1008 18:50:10.969286  497176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:50:10.969351  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:10.986349  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:11.075179  497176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 18:50:11.079910  497176 fix.go:56] duration metric: took 4.628876471s for fixHost
	I1008 18:50:11.079938  497176 start.go:83] releasing machines lock for "old-k8s-version-265388", held for 4.628929639s
	I1008 18:50:11.080016  497176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-265388
	I1008 18:50:11.104601  497176 ssh_runner.go:195] Run: cat /version.json
	I1008 18:50:11.104641  497176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:50:11.104656  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:11.104718  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:11.125655  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:11.135275  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:11.363835  497176 ssh_runner.go:195] Run: systemctl --version
	I1008 18:50:11.368269  497176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 18:50:11.372418  497176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1008 18:50:11.391517  497176 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
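
The find/sed invocation above normalizes any loopback CNI config file: it injects a "name": "loopback" field when missing and pins "cniVersion" to 1.0.0 so the runtime's CNI loader accepts it. An equivalent patch sketched in Go with encoding/json; the file path in main is a hypothetical example, and the sketch mirrors only the command shown here, not minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// patchLoopbackConf adds "name": "loopback" when the field is missing and
// pins "cniVersion" to 1.0.0, mirroring the sed invocation in the log.
func patchLoopbackConf(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if conf["type"] != "loopback" {
		return nil // only loopback configs are touched
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Hypothetical loopback config path; the real run matches /etc/cni/net.d/*loopback.conf*.
	if err := patchLoopbackConf("/etc/cni/net.d/99-loopback.conf"); err != nil {
		fmt.Println(err)
	}
}
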
	I1008 18:50:11.391646  497176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:50:11.400704  497176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:50:11.400728  497176 start.go:495] detecting cgroup driver to use...
	I1008 18:50:11.400761  497176 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 18:50:11.400816  497176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 18:50:11.414842  497176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 18:50:11.426916  497176 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:50:11.426987  497176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:50:11.440491  497176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:50:11.452609  497176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:50:11.544148  497176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:50:11.642203  497176 docker.go:233] disabling docker service ...
	I1008 18:50:11.642360  497176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:50:11.655917  497176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:50:11.667335  497176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:50:11.765298  497176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:50:11.856264  497176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:50:11.868346  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:50:11.885837  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1008 18:50:11.897214  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 18:50:11.907835  497176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 18:50:11.907929  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 18:50:11.919463  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:50:11.930379  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 18:50:11.940664  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:50:11.950661  497176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:50:11.959990  497176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 18:50:11.970245  497176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:50:11.979335  497176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:50:11.987788  497176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:50:12.079000  497176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 18:50:12.281530  497176 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 18:50:12.281695  497176 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 18:50:12.287070  497176 start.go:563] Will wait 60s for crictl version
	I1008 18:50:12.287193  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:50:12.291272  497176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:50:12.339672  497176 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1008 18:50:12.339817  497176 ssh_runner.go:195] Run: containerd --version
	I1008 18:50:12.370130  497176 ssh_runner.go:195] Run: containerd --version
	I1008 18:50:12.403285  497176 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1008 18:50:12.405998  497176 cli_runner.go:164] Run: docker network inspect old-k8s-version-265388 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 18:50:12.423239  497176 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 18:50:12.426856  497176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
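
The bash one-liner above rewrites /etc/hosts so that exactly one entry maps host.minikube.internal to the network gateway (192.168.76.1): existing lines for that name are filtered out and a fresh tab-separated mapping is appended. A small Go sketch of the same filter-and-append idea; the helper name is illustrative, while the path, IP and hostname are the ones from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing lines for the given hostname and
// appends a single "<ip>\t<hostname>" mapping, mirroring the
// grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // old entry for this hostname, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
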
	I1008 18:50:12.437987  497176 kubeadm.go:883] updating cluster {Name:old-k8s-version-265388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-265388 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:50:12.438118  497176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1008 18:50:12.438181  497176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:50:12.479923  497176 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:50:12.479950  497176 containerd.go:534] Images already preloaded, skipping extraction
	I1008 18:50:12.480044  497176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:50:12.516259  497176 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:50:12.516285  497176 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:50:12.516294  497176 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1008 18:50:12.516440  497176 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-265388 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-265388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:50:12.516510  497176 ssh_runner.go:195] Run: sudo crictl info
	I1008 18:50:12.559652  497176 cni.go:84] Creating CNI manager for ""
	I1008 18:50:12.559679  497176 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:50:12.559689  497176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:50:12.559709  497176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-265388 NodeName:old-k8s-version-265388 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 18:50:12.559841  497176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-265388"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:50:12.559915  497176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 18:50:12.577097  497176 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:50:12.577213  497176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:50:12.586355  497176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1008 18:50:12.606935  497176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:50:12.625422  497176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1008 18:50:12.645505  497176 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 18:50:12.649121  497176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:50:12.660690  497176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:50:12.749389  497176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:50:12.767330  497176 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388 for IP: 192.168.76.2
	I1008 18:50:12.767354  497176 certs.go:194] generating shared ca certs ...
	I1008 18:50:12.767371  497176 certs.go:226] acquiring lock for ca certs: {Name:mk9b4a4bb626944e2ef6352dc46232c13e820586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:50:12.767569  497176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key
	I1008 18:50:12.767639  497176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key
	I1008 18:50:12.767655  497176 certs.go:256] generating profile certs ...
	I1008 18:50:12.767774  497176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.key
	I1008 18:50:12.767870  497176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/apiserver.key.60136463
	I1008 18:50:12.767949  497176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/proxy-client.key
	I1008 18:50:12.768084  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541.pem (1338 bytes)
	W1008 18:50:12.768137  497176 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541_empty.pem, impossibly tiny 0 bytes
	I1008 18:50:12.768154  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:50:12.768182  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem (1078 bytes)
	I1008 18:50:12.768242  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:50:12.768272  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem (1679 bytes)
	I1008 18:50:12.768347  497176 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem (1708 bytes)
	I1008 18:50:12.769094  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:50:12.797446  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:50:12.826472  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:50:12.864667  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 18:50:12.895270  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 18:50:12.925746  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:50:12.950871  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:50:12.977512  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:50:13.005461  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem --> /usr/share/ca-certificates/2885412.pem (1708 bytes)
	I1008 18:50:13.031843  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:50:13.059361  497176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541.pem --> /usr/share/ca-certificates/288541.pem (1338 bytes)
	I1008 18:50:13.084678  497176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:50:13.107013  497176 ssh_runner.go:195] Run: openssl version
	I1008 18:50:13.114230  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288541.pem && ln -fs /usr/share/ca-certificates/288541.pem /etc/ssl/certs/288541.pem"
	I1008 18:50:13.125150  497176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/288541.pem
	I1008 18:50:13.128799  497176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 18:10 /usr/share/ca-certificates/288541.pem
	I1008 18:50:13.128871  497176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288541.pem
	I1008 18:50:13.135872  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288541.pem /etc/ssl/certs/51391683.0"
	I1008 18:50:13.145401  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2885412.pem && ln -fs /usr/share/ca-certificates/2885412.pem /etc/ssl/certs/2885412.pem"
	I1008 18:50:13.155419  497176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2885412.pem
	I1008 18:50:13.159341  497176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 18:10 /usr/share/ca-certificates/2885412.pem
	I1008 18:50:13.159429  497176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2885412.pem
	I1008 18:50:13.166523  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2885412.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:50:13.175536  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:50:13.185751  497176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:50:13.189413  497176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:50:13.189489  497176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:50:13.196745  497176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:50:13.206226  497176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:50:13.209901  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:50:13.216955  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:50:13.224195  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:50:13.231333  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:50:13.238699  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:50:13.245665  497176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
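
Each of the openssl x509 ... -checkend 86400 runs above asks whether a control-plane certificate will still be valid 24 hours from now. The same check expressed in Go; the certificate path in main is one of the files probed above, and the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires before now+window, i.e. the condition that -checkend 86400 tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
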
	I1008 18:50:13.252603  497176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-265388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-265388 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:50:13.252703  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1008 18:50:13.252774  497176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:50:13.295803  497176 cri.go:89] found id: "6dbfaa42aabe649fb0ff5e09f3e02a0a105954ce395099a350087d2d7e798c12"
	I1008 18:50:13.295824  497176 cri.go:89] found id: "d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:50:13.295830  497176 cri.go:89] found id: "feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:50:13.295834  497176 cri.go:89] found id: "3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:50:13.295837  497176 cri.go:89] found id: "50bb48e19abbbf8a078385682483bd81d003b5b3c0faa309434777c9d5aee948"
	I1008 18:50:13.295841  497176 cri.go:89] found id: "08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:50:13.295845  497176 cri.go:89] found id: "22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:50:13.295848  497176 cri.go:89] found id: "9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:50:13.295875  497176 cri.go:89] found id: "b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:50:13.295882  497176 cri.go:89] found id: ""
	I1008 18:50:13.295957  497176 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1008 18:50:13.308332  497176 cri.go:116] JSON = null
	W1008 18:50:13.308382  497176 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
	I1008 18:50:13.308440  497176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 18:50:13.317112  497176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 18:50:13.317131  497176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 18:50:13.317186  497176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 18:50:13.326866  497176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 18:50:13.327520  497176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-265388" does not appear in /home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:50:13.327816  497176 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-283126/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-265388" cluster setting kubeconfig missing "old-k8s-version-265388" context setting]
	I1008 18:50:13.328263  497176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/kubeconfig: {Name:mkc40596aa3771ba8a6c8897a16b531991d7bc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:50:13.329642  497176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 18:50:13.339854  497176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1008 18:50:13.339934  497176 kubeadm.go:597] duration metric: took 22.796793ms to restartPrimaryControlPlane
	I1008 18:50:13.339960  497176 kubeadm.go:394] duration metric: took 87.366976ms to StartCluster
	I1008 18:50:13.339995  497176 settings.go:142] acquiring lock: {Name:mk88999f347ab2e93b53f54a6e8df12c27df7c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:50:13.340073  497176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:50:13.340992  497176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/kubeconfig: {Name:mkc40596aa3771ba8a6c8897a16b531991d7bc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:50:13.341218  497176 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 18:50:13.341537  497176 config.go:182] Loaded profile config "old-k8s-version-265388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1008 18:50:13.341589  497176 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 18:50:13.341767  497176 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-265388"
	I1008 18:50:13.341790  497176 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-265388"
	W1008 18:50:13.341797  497176 addons.go:243] addon storage-provisioner should already be in state true
	I1008 18:50:13.341824  497176 host.go:66] Checking if "old-k8s-version-265388" exists ...
	I1008 18:50:13.342338  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:13.342476  497176 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-265388"
	I1008 18:50:13.342496  497176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-265388"
	I1008 18:50:13.342731  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:13.342848  497176 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-265388"
	I1008 18:50:13.342871  497176 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-265388"
	W1008 18:50:13.342878  497176 addons.go:243] addon metrics-server should already be in state true
	I1008 18:50:13.342910  497176 host.go:66] Checking if "old-k8s-version-265388" exists ...
	I1008 18:50:13.343390  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:13.345806  497176 addons.go:69] Setting dashboard=true in profile "old-k8s-version-265388"
	I1008 18:50:13.345829  497176 addons.go:234] Setting addon dashboard=true in "old-k8s-version-265388"
	W1008 18:50:13.345837  497176 addons.go:243] addon dashboard should already be in state true
	I1008 18:50:13.345877  497176 host.go:66] Checking if "old-k8s-version-265388" exists ...
	I1008 18:50:13.346444  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:13.346443  497176 out.go:177] * Verifying Kubernetes components...
	I1008 18:50:13.349269  497176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:50:13.382411  497176 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-265388"
	W1008 18:50:13.382437  497176 addons.go:243] addon default-storageclass should already be in state true
	I1008 18:50:13.382463  497176 host.go:66] Checking if "old-k8s-version-265388" exists ...
	I1008 18:50:13.382881  497176 cli_runner.go:164] Run: docker container inspect old-k8s-version-265388 --format={{.State.Status}}
	I1008 18:50:13.383525  497176 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 18:50:13.391423  497176 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1008 18:50:13.394370  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 18:50:13.394407  497176 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 18:50:13.394488  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:13.407682  497176 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 18:50:13.410292  497176 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 18:50:13.410318  497176 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 18:50:13.410393  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:13.415012  497176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:50:13.417713  497176 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:50:13.417736  497176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 18:50:13.417799  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:13.444747  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:13.462157  497176 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 18:50:13.462178  497176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 18:50:13.462241  497176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-265388
	I1008 18:50:13.468508  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:13.478685  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:13.512246  497176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/old-k8s-version-265388/id_rsa Username:docker}
	I1008 18:50:13.534650  497176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:50:13.573151  497176 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-265388" to be "Ready" ...
	I1008 18:50:13.591437  497176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 18:50:13.591509  497176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 18:50:13.603472  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 18:50:13.603546  497176 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 18:50:13.614642  497176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 18:50:13.614712  497176 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 18:50:13.631612  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 18:50:13.631687  497176 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 18:50:13.653069  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:50:13.655848  497176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:13.655993  497176 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 18:50:13.670014  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 18:50:13.670091  497176 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 18:50:13.674226  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:50:13.706570  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:13.721433  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 18:50:13.721503  497176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 18:50:13.798020  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 18:50:13.798099  497176 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1008 18:50:13.821512  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:13.821594  497176 retry.go:31] will retry after 185.752085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:13.839316  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:13.839417  497176 retry.go:31] will retry after 132.347073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:13.852914  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 18:50:13.852993  497176 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1008 18:50:13.875380  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:13.875422  497176 retry.go:31] will retry after 332.746392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:13.875948  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 18:50:13.875968  497176 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 18:50:13.895190  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 18:50:13.895218  497176 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 18:50:13.919905  497176 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 18:50:13.919941  497176 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 18:50:13.939938  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 18:50:13.972202  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:50:14.007634  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:14.048135  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.048171  497176 retry.go:31] will retry after 167.272524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:14.082384  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.082419  497176 retry.go:31] will retry after 322.524572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:14.125125  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.125161  497176 retry.go:31] will retry after 438.107676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.208424  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:14.215837  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 18:50:14.303336  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.303408  497176 retry.go:31] will retry after 220.021689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:14.321956  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.321991  497176 retry.go:31] will retry after 355.560802ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.405140  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 18:50:14.479254  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.479287  497176 retry.go:31] will retry after 803.646331ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.524470  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:14.564082  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:14.624497  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.624608  497176 retry.go:31] will retry after 490.764948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:14.663594  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.663624  497176 retry.go:31] will retry after 508.492307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.677889  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 18:50:14.750485  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:14.750519  497176 retry.go:31] will retry after 382.714499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.116575  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:15.134335  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 18:50:15.172875  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:15.248841  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.248916  497176 retry.go:31] will retry after 624.219062ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:15.262246  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.262281  497176 retry.go:31] will retry after 1.009582173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.283624  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 18:50:15.314983  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.315069  497176 retry.go:31] will retry after 1.005691041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:15.363246  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.363293  497176 retry.go:31] will retry after 884.422613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.573780  497176 node_ready.go:53] error getting node "old-k8s-version-265388": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-265388": dial tcp 192.168.76.2:8443: connect: connection refused
	I1008 18:50:15.874274  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1008 18:50:15.960447  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:15.960530  497176 retry.go:31] will retry after 857.492907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.248562  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:50:16.273020  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 18:50:16.321523  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:16.335529  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.335564  497176 retry.go:31] will retry after 864.76887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:16.386985  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.387022  497176 retry.go:31] will retry after 1.573512074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1008 18:50:16.415284  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.415350  497176 retry.go:31] will retry after 1.75239958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.818194  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1008 18:50:16.891268  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:16.891303  497176 retry.go:31] will retry after 2.577761427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:17.201390  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 18:50:17.304361  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:17.304443  497176 retry.go:31] will retry after 1.952730566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:17.574139  497176 node_ready.go:53] error getting node "old-k8s-version-265388": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-265388": dial tcp 192.168.76.2:8443: connect: connection refused
	I1008 18:50:17.961765  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 18:50:18.040034  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:18.040070  497176 retry.go:31] will retry after 2.524154948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:18.168870  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:18.281185  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:18.281219  497176 retry.go:31] will retry after 1.927211414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:19.258059  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1008 18:50:19.339582  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:19.339618  497176 retry.go:31] will retry after 3.476564821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:19.469940  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1008 18:50:19.544472  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:19.544503  497176 retry.go:31] will retry after 2.699120529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:20.074207  497176 node_ready.go:53] error getting node "old-k8s-version-265388": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-265388": dial tcp 192.168.76.2:8443: connect: connection refused
	I1008 18:50:20.209504  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1008 18:50:20.282228  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:20.282264  497176 retry.go:31] will retry after 4.123969785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:20.564758  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 18:50:20.634679  497176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:20.634711  497176 retry.go:31] will retry after 2.378869376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1008 18:50:22.074522  497176 node_ready.go:53] error getting node "old-k8s-version-265388": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-265388": dial tcp 192.168.76.2:8443: connect: connection refused
	I1008 18:50:22.244674  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 18:50:22.816762  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:50:23.014258  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 18:50:24.407023  497176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:50:30.712930  497176 node_ready.go:49] node "old-k8s-version-265388" has status "Ready":"True"
	I1008 18:50:30.712960  497176 node_ready.go:38] duration metric: took 17.139776421s for node "old-k8s-version-265388" to be "Ready" ...
	I1008 18:50:30.712971  497176 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:50:31.093116  497176 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-qc6g5" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:31.281542  497176 pod_ready.go:93] pod "coredns-74ff55c5b-qc6g5" in "kube-system" namespace has status "Ready":"True"
	I1008 18:50:31.281574  497176 pod_ready.go:82] duration metric: took 188.419463ms for pod "coredns-74ff55c5b-qc6g5" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:31.281588  497176 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:31.315856  497176 pod_ready.go:93] pod "etcd-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"True"
	I1008 18:50:31.315888  497176 pod_ready.go:82] duration metric: took 34.292151ms for pod "etcd-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:31.315904  497176 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:33.157297  497176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.912584517s)
	I1008 18:50:33.157336  497176 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-265388"
	I1008 18:50:33.296465  497176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.47966607s)
	I1008 18:50:33.366655  497176 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:33.617121  497176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.602804308s)
	I1008 18:50:33.617351  497176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.210245522s)
	I1008 18:50:33.619980  497176 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-265388 addons enable metrics-server
	
	I1008 18:50:33.623764  497176 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I1008 18:50:33.626497  497176 addons.go:510] duration metric: took 20.284897046s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I1008 18:50:35.822448  497176 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:38.322141  497176 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:40.352565  497176 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:42.323133  497176 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"True"
	I1008 18:50:42.323160  497176 pod_ready.go:82] duration metric: took 11.007248281s for pod "kube-apiserver-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:42.323173  497176 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:50:44.343173  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:46.829533  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:48.842976  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:51.330127  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:53.334369  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:55.338108  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:57.830701  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:50:59.836046  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:02.329906  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:04.330444  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:06.332994  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:08.830932  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:11.336980  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:13.829769  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:16.329405  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:18.336866  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:20.829865  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:23.330734  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:25.830196  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:28.331005  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:30.838332  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:33.330680  497176 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:35.329628  497176 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"True"
	I1008 18:51:35.329654  497176 pod_ready.go:82] duration metric: took 53.006471832s for pod "kube-controller-manager-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:35.329666  497176 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jtkrl" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:35.335456  497176 pod_ready.go:93] pod "kube-proxy-jtkrl" in "kube-system" namespace has status "Ready":"True"
	I1008 18:51:35.335480  497176 pod_ready.go:82] duration metric: took 5.76724ms for pod "kube-proxy-jtkrl" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:35.335491  497176 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:37.341749  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:39.342459  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:41.346239  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:43.841506  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:45.842874  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:48.341750  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:50.342998  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:52.841292  497176 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:54.341788  497176 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace has status "Ready":"True"
	I1008 18:51:54.341813  497176 pod_ready.go:82] duration metric: took 19.006314725s for pod "kube-scheduler-old-k8s-version-265388" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:54.341825  497176 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace to be "Ready" ...
	I1008 18:51:56.348667  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:51:58.852559  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:01.348450  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:03.850454  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:06.348462  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:08.352234  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:10.848313  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:13.350434  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:15.849512  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:18.348665  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:20.348699  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:22.349329  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:24.848629  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:27.348105  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:29.348215  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:31.847972  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:33.848197  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:35.849165  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:37.851334  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:40.348209  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:42.348361  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:44.847854  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:46.849497  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:48.850439  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:51.349952  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:53.848007  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:56.348114  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:52:58.348978  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:00.349757  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:02.849231  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:05.348527  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:07.348907  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:09.847773  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:11.848149  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:13.848450  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:16.347898  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:18.348368  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:20.847758  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:22.847986  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:24.853451  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:27.348951  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:29.847888  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:31.877157  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:34.349345  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:36.848197  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:38.850085  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:41.348928  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:43.847379  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:45.848937  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:48.348202  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:50.856048  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:53.348370  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:55.349297  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:53:57.848779  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:00.349352  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:02.848139  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:04.848359  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:06.849718  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:08.850697  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:11.348710  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:13.348849  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:15.848976  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:17.849760  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:20.347277  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:22.348242  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:24.848223  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:26.849024  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:28.853934  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:31.347172  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:33.348638  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:35.349952  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:37.848459  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:39.848821  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:41.856247  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:44.347818  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:46.348494  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:48.852301  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:51.348357  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:53.856833  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:56.348935  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:54:58.850652  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:01.350634  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:03.848140  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:05.848606  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:08.348532  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:10.349538  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:12.848247  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:14.848505  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:16.848825  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:18.857825  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:21.349256  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:23.847836  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:25.848795  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:27.848955  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:29.849650  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:31.850442  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:34.348659  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:36.350867  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:38.850850  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:41.349508  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:43.351202  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:45.352641  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:47.850485  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:49.851395  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:52.349040  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:54.348425  497176 pod_ready.go:82] duration metric: took 4m0.006587025s for pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace to be "Ready" ...
	E1008 18:55:54.348455  497176 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 18:55:54.348465  497176 pod_ready.go:39] duration metric: took 5m23.635482712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:55:54.348477  497176 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:55:54.348510  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1008 18:55:54.348572  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 18:55:54.426988  497176 cri.go:89] found id: "8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:55:54.427011  497176 cri.go:89] found id: "9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:55:54.427016  497176 cri.go:89] found id: ""
	I1008 18:55:54.427023  497176 logs.go:282] 2 containers: [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2]
	I1008 18:55:54.427095  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.434289  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.438042  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1008 18:55:54.438116  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 18:55:54.534104  497176 cri.go:89] found id: "67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:55:54.534131  497176 cri.go:89] found id: "b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:55:54.534137  497176 cri.go:89] found id: ""
	I1008 18:55:54.534144  497176 logs.go:282] 2 containers: [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3]
	I1008 18:55:54.534203  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.538467  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.542190  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1008 18:55:54.542266  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 18:55:54.608636  497176 cri.go:89] found id: "c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:55:54.608662  497176 cri.go:89] found id: "d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:55:54.608668  497176 cri.go:89] found id: ""
	I1008 18:55:54.608675  497176 logs.go:282] 2 containers: [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e]
	I1008 18:55:54.608733  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.612269  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.617761  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1008 18:55:54.617831  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 18:55:54.682939  497176 cri.go:89] found id: "3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:55:54.682965  497176 cri.go:89] found id: "22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:55:54.682969  497176 cri.go:89] found id: ""
	I1008 18:55:54.682977  497176 logs.go:282] 2 containers: [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e]
	I1008 18:55:54.683030  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.690302  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.694724  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1008 18:55:54.694807  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 18:55:54.764434  497176 cri.go:89] found id: "5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:55:54.764461  497176 cri.go:89] found id: "3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:55:54.764472  497176 cri.go:89] found id: ""
	I1008 18:55:54.764478  497176 logs.go:282] 2 containers: [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52]
	I1008 18:55:54.764549  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.774003  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.784426  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 18:55:54.784510  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 18:55:54.868383  497176 cri.go:89] found id: "d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:55:54.868409  497176 cri.go:89] found id: "08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:55:54.868415  497176 cri.go:89] found id: ""
	I1008 18:55:54.868423  497176 logs.go:282] 2 containers: [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9]
	I1008 18:55:54.868478  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.872165  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.875415  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1008 18:55:54.875492  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 18:55:54.983204  497176 cri.go:89] found id: "afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:55:54.983223  497176 cri.go:89] found id: "feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:55:54.983235  497176 cri.go:89] found id: ""
	I1008 18:55:54.983243  497176 logs.go:282] 2 containers: [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51]
	I1008 18:55:54.983298  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.994273  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.998172  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 18:55:54.998243  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 18:55:55.064003  497176 cri.go:89] found id: "4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:55:55.064079  497176 cri.go:89] found id: ""
	I1008 18:55:55.064102  497176 logs.go:282] 1 containers: [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff]
	I1008 18:55:55.064192  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.068413  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1008 18:55:55.068537  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 18:55:55.125442  497176 cri.go:89] found id: "54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:55:55.125527  497176 cri.go:89] found id: "b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:55:55.125547  497176 cri.go:89] found id: ""
	I1008 18:55:55.125571  497176 logs.go:282] 2 containers: [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1]
	I1008 18:55:55.125656  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.129377  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.132685  497176 logs.go:123] Gathering logs for kubelet ...
	I1008 18:55:55.132749  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 18:55:55.204498  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.728573     661 reflector.go:138] object-"kube-system"/"kindnet-token-5g4mc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5g4mc" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.204731  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729238     661 reflector.go:138] object-"kube-system"/"coredns-token-zcdnl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zcdnl" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.204951  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729407     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-szd4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-szd4x" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205182  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729538     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w5946": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w5946" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205402  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729716     661 reflector.go:138] object-"kube-system"/"metrics-server-token-x2kc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2kc9" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205606  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748500     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205912  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748741     661 reflector.go:138] object-"default"/"default-token-l5v6w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-l5v6w" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.206141  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.749595     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.215664  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:33 old-k8s-version-265388 kubelet[661]: E1008 18:50:33.549244     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.215870  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:34 old-k8s-version-265388 kubelet[661]: E1008 18:50:34.447550     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.218711  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.276856     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.219167  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.506612     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-pqd5j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-pqd5j" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.220741  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:58 old-k8s-version-265388 kubelet[661]: E1008 18:50:58.301882     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.221540  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:02 old-k8s-version-265388 kubelet[661]: E1008 18:51:02.574007     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.222010  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:03 old-k8s-version-265388 kubelet[661]: E1008 18:51:03.579128     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.222445  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:04 old-k8s-version-265388 kubelet[661]: E1008 18:51:04.583788     661 pod_workers.go:191] Error syncing pod 26175fac-5bc1-416f-b866-36430292c80d ("storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"
	W1008 18:55:55.222767  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:08 old-k8s-version-265388 kubelet[661]: E1008 18:51:08.571898     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.225526  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:13 old-k8s-version-265388 kubelet[661]: E1008 18:51:13.298924     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.226312  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:21 old-k8s-version-265388 kubelet[661]: E1008 18:51:21.661055     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.226521  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:25 old-k8s-version-265388 kubelet[661]: E1008 18:51:25.268458     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.226866  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:28 old-k8s-version-265388 kubelet[661]: E1008 18:51:28.571883     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.227217  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.268329     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.227431  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.273781     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.228098  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:51 old-k8s-version-265388 kubelet[661]: E1008 18:51:51.739290     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.228305  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:52 old-k8s-version-265388 kubelet[661]: E1008 18:51:52.268075     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.228654  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:58 old-k8s-version-265388 kubelet[661]: E1008 18:51:58.572095     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.231131  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:06 old-k8s-version-265388 kubelet[661]: E1008 18:52:06.276619     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.231493  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:12 old-k8s-version-265388 kubelet[661]: E1008 18:52:12.267768     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.231702  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:21 old-k8s-version-265388 kubelet[661]: E1008 18:52:21.271213     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.232057  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:24 old-k8s-version-265388 kubelet[661]: E1008 18:52:24.268448     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.232267  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:35 old-k8s-version-265388 kubelet[661]: E1008 18:52:35.268779     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.232880  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:37 old-k8s-version-265388 kubelet[661]: E1008 18:52:37.854127     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.233241  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:38 old-k8s-version-265388 kubelet[661]: E1008 18:52:38.857799     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.233483  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:48 old-k8s-version-265388 kubelet[661]: E1008 18:52:48.268171     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.233848  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:50 old-k8s-version-265388 kubelet[661]: E1008 18:52:50.268221     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.234080  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:59 old-k8s-version-265388 kubelet[661]: E1008 18:52:59.268518     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.234427  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:02 old-k8s-version-265388 kubelet[661]: E1008 18:53:02.267840     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.234631  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:12 old-k8s-version-265388 kubelet[661]: E1008 18:53:12.268232     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.241152  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:13 old-k8s-version-265388 kubelet[661]: E1008 18:53:13.267801     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.241395  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:25 old-k8s-version-265388 kubelet[661]: E1008 18:53:25.268175     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.241755  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:27 old-k8s-version-265388 kubelet[661]: E1008 18:53:27.267995     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.244233  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:39 old-k8s-version-265388 kubelet[661]: E1008 18:53:39.279009     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.244583  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:42 old-k8s-version-265388 kubelet[661]: E1008 18:53:42.267851     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.244792  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:51 old-k8s-version-265388 kubelet[661]: E1008 18:53:51.268831     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.245167  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:56 old-k8s-version-265388 kubelet[661]: E1008 18:53:56.267856     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.245376  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:06 old-k8s-version-265388 kubelet[661]: E1008 18:54:06.268365     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.246041  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:11 old-k8s-version-265388 kubelet[661]: E1008 18:54:11.101872     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.246407  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:18 old-k8s-version-265388 kubelet[661]: E1008 18:54:18.571932     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.246614  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:21 old-k8s-version-265388 kubelet[661]: E1008 18:54:21.268711     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.246962  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:29 old-k8s-version-265388 kubelet[661]: E1008 18:54:29.268265     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.247172  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:36 old-k8s-version-265388 kubelet[661]: E1008 18:54:36.268342     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.247535  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:41 old-k8s-version-265388 kubelet[661]: E1008 18:54:41.268698     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.247748  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:49 old-k8s-version-265388 kubelet[661]: E1008 18:54:49.268273     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.248094  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:53 old-k8s-version-265388 kubelet[661]: E1008 18:54:53.268289     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.248302  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:03 old-k8s-version-265388 kubelet[661]: E1008 18:55:03.269930     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.248655  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: E1008 18:55:06.268627     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.248867  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:14 old-k8s-version-265388 kubelet[661]: E1008 18:55:14.268646     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.249222  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: E1008 18:55:17.269172     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.249597  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.249804  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.249987  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.250313  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.250634  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:55:55.250643  497176 logs.go:123] Gathering logs for kube-apiserver [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47] ...
	I1008 18:55:55.250658  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:55:55.352772  497176 logs.go:123] Gathering logs for etcd [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec] ...
	I1008 18:55:55.352807  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:55:55.444919  497176 logs.go:123] Gathering logs for etcd [b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3] ...
	I1008 18:55:55.444994  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:55:55.519917  497176 logs.go:123] Gathering logs for kindnet [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15] ...
	I1008 18:55:55.520149  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:55:55.587556  497176 logs.go:123] Gathering logs for container status ...
	I1008 18:55:55.587627  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 18:55:55.665350  497176 logs.go:123] Gathering logs for dmesg ...
	I1008 18:55:55.665378  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 18:55:55.697341  497176 logs.go:123] Gathering logs for kube-apiserver [9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2] ...
	I1008 18:55:55.697378  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:55:55.803526  497176 logs.go:123] Gathering logs for kube-proxy [3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52] ...
	I1008 18:55:55.803559  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:55:55.880986  497176 logs.go:123] Gathering logs for kube-controller-manager [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de] ...
	I1008 18:55:55.881062  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:55:56.010386  497176 logs.go:123] Gathering logs for kindnet [feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51] ...
	I1008 18:55:56.010423  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:55:56.082261  497176 logs.go:123] Gathering logs for containerd ...
	I1008 18:55:56.082295  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1008 18:55:56.159479  497176 logs.go:123] Gathering logs for describe nodes ...
	I1008 18:55:56.159556  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 18:55:56.420743  497176 logs.go:123] Gathering logs for kube-scheduler [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650] ...
	I1008 18:55:56.420775  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:55:56.494207  497176 logs.go:123] Gathering logs for kube-scheduler [22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e] ...
	I1008 18:55:56.494235  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:55:56.559318  497176 logs.go:123] Gathering logs for kube-proxy [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29] ...
	I1008 18:55:56.559390  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:55:56.641366  497176 logs.go:123] Gathering logs for kube-controller-manager [08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9] ...
	I1008 18:55:56.641439  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:55:56.740485  497176 logs.go:123] Gathering logs for storage-provisioner [b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1] ...
	I1008 18:55:56.740568  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:55:56.808321  497176 logs.go:123] Gathering logs for coredns [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579] ...
	I1008 18:55:56.808349  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:55:56.861793  497176 logs.go:123] Gathering logs for coredns [d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e] ...
	I1008 18:55:56.861864  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:55:56.926327  497176 logs.go:123] Gathering logs for kubernetes-dashboard [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff] ...
	I1008 18:55:56.926405  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:55:57.010485  497176 logs.go:123] Gathering logs for storage-provisioner [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7] ...
	I1008 18:55:57.010570  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:55:57.080247  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:55:57.080318  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 18:55:57.080398  497176 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 18:55:57.080444  497176 out.go:270]   Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:57.080479  497176 out.go:270]   Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:57.080535  497176 out.go:270]   Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:57.080615  497176 out.go:270]   Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:57.080661  497176 out.go:270]   Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:55:57.080697  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:55:57.080759  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:56:07.086382  497176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:56:07.115541  497176 api_server.go:72] duration metric: took 5m53.774286827s to wait for apiserver process to appear ...
	I1008 18:56:07.115566  497176 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:56:07.115616  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1008 18:56:07.115669  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 18:56:07.197609  497176 cri.go:89] found id: "8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:56:07.197630  497176 cri.go:89] found id: "9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:56:07.197636  497176 cri.go:89] found id: ""
	I1008 18:56:07.197643  497176 logs.go:282] 2 containers: [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2]
	I1008 18:56:07.197819  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.202055  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.210623  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1008 18:56:07.210693  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 18:56:07.290879  497176 cri.go:89] found id: "67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:56:07.290905  497176 cri.go:89] found id: "b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:56:07.290910  497176 cri.go:89] found id: ""
	I1008 18:56:07.290917  497176 logs.go:282] 2 containers: [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3]
	I1008 18:56:07.290971  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.298486  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.305409  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1008 18:56:07.305487  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 18:56:07.379981  497176 cri.go:89] found id: "c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:56:07.380001  497176 cri.go:89] found id: "d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:56:07.380005  497176 cri.go:89] found id: ""
	I1008 18:56:07.380013  497176 logs.go:282] 2 containers: [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e]
	I1008 18:56:07.380074  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.384702  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.388889  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1008 18:56:07.388954  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 18:56:07.498582  497176 cri.go:89] found id: "3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:56:07.498602  497176 cri.go:89] found id: "22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:56:07.498606  497176 cri.go:89] found id: ""
	I1008 18:56:07.498614  497176 logs.go:282] 2 containers: [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e]
	I1008 18:56:07.498668  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.504769  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.509607  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1008 18:56:07.509775  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 18:56:07.581606  497176 cri.go:89] found id: "5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:56:07.581735  497176 cri.go:89] found id: "3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:56:07.581781  497176 cri.go:89] found id: ""
	I1008 18:56:07.581805  497176 logs.go:282] 2 containers: [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52]
	I1008 18:56:07.581895  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.586372  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.590691  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 18:56:07.590838  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 18:56:07.666361  497176 cri.go:89] found id: "d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:56:07.666435  497176 cri.go:89] found id: "08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:56:07.666454  497176 cri.go:89] found id: ""
	I1008 18:56:07.666479  497176 logs.go:282] 2 containers: [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9]
	I1008 18:56:07.666568  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.673975  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.679696  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1008 18:56:07.679825  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 18:56:07.751196  497176 cri.go:89] found id: "afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:56:07.751257  497176 cri.go:89] found id: "feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:56:07.751285  497176 cri.go:89] found id: ""
	I1008 18:56:07.751306  497176 logs.go:282] 2 containers: [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51]
	I1008 18:56:07.751392  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.757010  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.762153  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 18:56:07.762270  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 18:56:07.859980  497176 cri.go:89] found id: "4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:56:07.860049  497176 cri.go:89] found id: ""
	I1008 18:56:07.860073  497176 logs.go:282] 1 containers: [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff]
	I1008 18:56:07.860157  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.865322  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1008 18:56:07.865453  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 18:56:07.974577  497176 cri.go:89] found id: "54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:56:07.974649  497176 cri.go:89] found id: "b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:56:07.974677  497176 cri.go:89] found id: ""
	I1008 18:56:07.974700  497176 logs.go:282] 2 containers: [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1]
	I1008 18:56:07.974785  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.982927  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.988764  497176 logs.go:123] Gathering logs for describe nodes ...
	I1008 18:56:07.988838  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 18:56:08.241929  497176 logs.go:123] Gathering logs for kindnet [feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51] ...
	I1008 18:56:08.241964  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:56:08.305129  497176 logs.go:123] Gathering logs for kube-controller-manager [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de] ...
	I1008 18:56:08.305159  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:56:08.435945  497176 logs.go:123] Gathering logs for kindnet [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15] ...
	I1008 18:56:08.436018  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:56:08.516643  497176 logs.go:123] Gathering logs for kubelet ...
	I1008 18:56:08.516717  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 18:56:08.588108  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.728573     661 reflector.go:138] object-"kube-system"/"kindnet-token-5g4mc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5g4mc" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.588437  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729238     661 reflector.go:138] object-"kube-system"/"coredns-token-zcdnl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zcdnl" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590014  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729407     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-szd4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-szd4x" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590300  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729538     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w5946": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w5946" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590624  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729716     661 reflector.go:138] object-"kube-system"/"metrics-server-token-x2kc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2kc9" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590888  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748500     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.591137  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748741     661 reflector.go:138] object-"default"/"default-token-l5v6w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-l5v6w" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.591381  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.749595     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.607949  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:33 old-k8s-version-265388 kubelet[661]: E1008 18:50:33.549244     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.608472  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:34 old-k8s-version-265388 kubelet[661]: E1008 18:50:34.447550     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.613378  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.276856     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.614298  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.506612     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-pqd5j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-pqd5j" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.616109  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:58 old-k8s-version-265388 kubelet[661]: E1008 18:50:58.301882     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.617085  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:02 old-k8s-version-265388 kubelet[661]: E1008 18:51:02.574007     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.617689  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:03 old-k8s-version-265388 kubelet[661]: E1008 18:51:03.579128     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.622315  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:04 old-k8s-version-265388 kubelet[661]: E1008 18:51:04.583788     661 pod_workers.go:191] Error syncing pod 26175fac-5bc1-416f-b866-36430292c80d ("storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"
	W1008 18:56:08.622694  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:08 old-k8s-version-265388 kubelet[661]: E1008 18:51:08.571898     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.625651  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:13 old-k8s-version-265388 kubelet[661]: E1008 18:51:13.298924     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.627906  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:21 old-k8s-version-265388 kubelet[661]: E1008 18:51:21.661055     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.628148  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:25 old-k8s-version-265388 kubelet[661]: E1008 18:51:25.268458     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.628510  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:28 old-k8s-version-265388 kubelet[661]: E1008 18:51:28.571883     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.628863  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.268329     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.629091  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.273781     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.629734  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:51 old-k8s-version-265388 kubelet[661]: E1008 18:51:51.739290     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.629948  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:52 old-k8s-version-265388 kubelet[661]: E1008 18:51:52.268075     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.630313  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:58 old-k8s-version-265388 kubelet[661]: E1008 18:51:58.572095     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.635293  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:06 old-k8s-version-265388 kubelet[661]: E1008 18:52:06.276619     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.635680  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:12 old-k8s-version-265388 kubelet[661]: E1008 18:52:12.267768     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.635901  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:21 old-k8s-version-265388 kubelet[661]: E1008 18:52:21.271213     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.636256  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:24 old-k8s-version-265388 kubelet[661]: E1008 18:52:24.268448     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.636465  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:35 old-k8s-version-265388 kubelet[661]: E1008 18:52:35.268779     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.638588  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:37 old-k8s-version-265388 kubelet[661]: E1008 18:52:37.854127     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.638982  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:38 old-k8s-version-265388 kubelet[661]: E1008 18:52:38.857799     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.639203  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:48 old-k8s-version-265388 kubelet[661]: E1008 18:52:48.268171     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.639557  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:50 old-k8s-version-265388 kubelet[661]: E1008 18:52:50.268221     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.639771  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:59 old-k8s-version-265388 kubelet[661]: E1008 18:52:59.268518     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.640652  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:02 old-k8s-version-265388 kubelet[661]: E1008 18:53:02.267840     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.640888  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:12 old-k8s-version-265388 kubelet[661]: E1008 18:53:12.268232     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.641250  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:13 old-k8s-version-265388 kubelet[661]: E1008 18:53:13.267801     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.641639  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:25 old-k8s-version-265388 kubelet[661]: E1008 18:53:25.268175     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.642022  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:27 old-k8s-version-265388 kubelet[661]: E1008 18:53:27.267995     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.646298  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:39 old-k8s-version-265388 kubelet[661]: E1008 18:53:39.279009     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.646700  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:42 old-k8s-version-265388 kubelet[661]: E1008 18:53:42.267851     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.646917  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:51 old-k8s-version-265388 kubelet[661]: E1008 18:53:51.268831     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.647280  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:56 old-k8s-version-265388 kubelet[661]: E1008 18:53:56.267856     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.647491  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:06 old-k8s-version-265388 kubelet[661]: E1008 18:54:06.268365     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.648103  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:11 old-k8s-version-265388 kubelet[661]: E1008 18:54:11.101872     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.650134  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:18 old-k8s-version-265388 kubelet[661]: E1008 18:54:18.571932     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.650407  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:21 old-k8s-version-265388 kubelet[661]: E1008 18:54:21.268711     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.650825  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:29 old-k8s-version-265388 kubelet[661]: E1008 18:54:29.268265     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.651043  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:36 old-k8s-version-265388 kubelet[661]: E1008 18:54:36.268342     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.651395  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:41 old-k8s-version-265388 kubelet[661]: E1008 18:54:41.268698     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.651627  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:49 old-k8s-version-265388 kubelet[661]: E1008 18:54:49.268273     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.651995  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:53 old-k8s-version-265388 kubelet[661]: E1008 18:54:53.268289     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.652204  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:03 old-k8s-version-265388 kubelet[661]: E1008 18:55:03.269930     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.652566  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: E1008 18:55:06.268627     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.652776  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:14 old-k8s-version-265388 kubelet[661]: E1008 18:55:14.268646     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.653623  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: E1008 18:55:17.269172     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.654013  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.654234  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.654447  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.655999  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.656374  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.656588  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.657958  497176 logs.go:138] Found kubelet problem: Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:56:08.658025  497176 logs.go:123] Gathering logs for kube-apiserver [9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2] ...
	I1008 18:56:08.658056  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:56:08.778612  497176 logs.go:123] Gathering logs for etcd [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec] ...
	I1008 18:56:08.778688  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:56:08.869255  497176 logs.go:123] Gathering logs for coredns [d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e] ...
	I1008 18:56:08.869287  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:56:08.987238  497176 logs.go:123] Gathering logs for kube-scheduler [22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e] ...
	I1008 18:56:08.987263  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:56:09.054555  497176 logs.go:123] Gathering logs for kube-proxy [3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52] ...
	I1008 18:56:09.054631  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:56:09.116876  497176 logs.go:123] Gathering logs for storage-provisioner [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7] ...
	I1008 18:56:09.116937  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:56:09.169481  497176 logs.go:123] Gathering logs for containerd ...
	I1008 18:56:09.169562  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1008 18:56:09.232758  497176 logs.go:123] Gathering logs for coredns [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579] ...
	I1008 18:56:09.232801  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:56:09.294358  497176 logs.go:123] Gathering logs for kube-scheduler [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650] ...
	I1008 18:56:09.294395  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:56:09.335736  497176 logs.go:123] Gathering logs for storage-provisioner [b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1] ...
	I1008 18:56:09.335764  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:56:09.375000  497176 logs.go:123] Gathering logs for container status ...
	I1008 18:56:09.375027  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 18:56:09.452431  497176 logs.go:123] Gathering logs for dmesg ...
	I1008 18:56:09.452575  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 18:56:09.487791  497176 logs.go:123] Gathering logs for kube-apiserver [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47] ...
	I1008 18:56:09.487825  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:56:09.577527  497176 logs.go:123] Gathering logs for etcd [b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3] ...
	I1008 18:56:09.577562  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:56:09.653886  497176 logs.go:123] Gathering logs for kube-proxy [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29] ...
	I1008 18:56:09.654037  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:56:09.714977  497176 logs.go:123] Gathering logs for kube-controller-manager [08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9] ...
	I1008 18:56:09.715057  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:56:09.799165  497176 logs.go:123] Gathering logs for kubernetes-dashboard [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff] ...
	I1008 18:56:09.799242  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:56:09.847126  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:56:09.847153  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 18:56:09.847203  497176 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 18:56:09.847219  497176 out.go:270]   Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:09.847227  497176 out.go:270]   Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:09.847235  497176 out.go:270]   Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:09.847249  497176 out.go:270]   Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:09.847255  497176 out.go:270]   Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	  Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:56:09.847261  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:56:09.847268  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:56:19.849038  497176 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 18:56:19.860801  497176 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 18:56:19.863849  497176 out.go:201] 
	W1008 18:56:19.866435  497176 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1008 18:56:19.866472  497176 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1008 18:56:19.866489  497176 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1008 18:56:19.866496  497176 out.go:270] * 
	* 
	W1008 18:56:19.867660  497176 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 18:56:19.870754  497176 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-265388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-265388
helpers_test.go:235: (dbg) docker inspect old-k8s-version-265388:

-- stdout --
	[
	    {
	        "Id": "dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff",
	        "Created": "2024-10-08T18:46:51.677598822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-08T18:50:06.603444621Z",
	            "FinishedAt": "2024-10-08T18:50:05.690467555Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff/hosts",
	        "LogPath": "/var/lib/docker/containers/dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff/dda10b1ea35380501a544c0a5f71d086d0690bc921cd1f02f86dd8cb3109e1ff-json.log",
	        "Name": "/old-k8s-version-265388",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-265388:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-265388",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/13d02724f804a50cea930235f7e867d6222278ef34567bd098df9bb0bda1c5e2-init/diff:/var/lib/docker/overlay2/211ed394d64374fe90b3e50a914ebed5f9b85a2e1d8650161b42163931148dcb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13d02724f804a50cea930235f7e867d6222278ef34567bd098df9bb0bda1c5e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13d02724f804a50cea930235f7e867d6222278ef34567bd098df9bb0bda1c5e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13d02724f804a50cea930235f7e867d6222278ef34567bd098df9bb0bda1c5e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-265388",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-265388/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-265388",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-265388",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-265388",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87119171259b4c38e553e3303c952c26a697f0dedaf2308d9b2964efbdebe5c2",
	            "SandboxKey": "/var/run/docker/netns/87119171259b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-265388": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d758cb0e13808f146a535084fea3b1e7f79f455d23235e1868cb84a063ca8f61",
	                    "EndpointID": "cc10c0a9b64b523b02acb90abb80f8b08c1ad8688e4daaea26ecc2ea787f2870",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-265388",
	                        "dda10b1ea353"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-265388 -n old-k8s-version-265388
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-265388 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-265388 logs -n 25: (2.001435046s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-974463                              | cert-expiration-974463   | jenkins | v1.34.0 | 08 Oct 24 18:45 UTC | 08 Oct 24 18:46 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-275989                               | force-systemd-env-275989 | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-275989                            | force-systemd-env-275989 | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	| start   | -p cert-options-178809                                 | cert-options-178809      | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-178809 ssh                                | cert-options-178809      | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-178809 -- sudo                         | cert-options-178809      | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-178809                                 | cert-options-178809      | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:46 UTC |
	| start   | -p old-k8s-version-265388                              | old-k8s-version-265388   | jenkins | v1.34.0 | 08 Oct 24 18:46 UTC | 08 Oct 24 18:49 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-974463                              | cert-expiration-974463   | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:49 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-974463                              | cert-expiration-974463   | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:49 UTC |
	| start   | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:50 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-265388        | old-k8s-version-265388   | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-265388                              | old-k8s-version-265388   | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-265388             | old-k8s-version-265388   | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-265388                              | old-k8s-version-265388   | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-351833             | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-351833                  | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:55 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-351833 image list                           | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC | 08 Oct 24 18:55 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC | 08 Oct 24 18:55 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC | 08 Oct 24 18:55 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC | 08 Oct 24 18:55 UTC |
	| delete  | -p no-preload-351833                                   | no-preload-351833        | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC | 08 Oct 24 18:55 UTC |
	| start   | -p embed-certs-423092                                  | embed-certs-423092       | jenkins | v1.34.0 | 08 Oct 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:55:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:55:32.734788  506650 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:55:32.734997  506650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:55:32.735025  506650 out.go:358] Setting ErrFile to fd 2...
	I1008 18:55:32.735045  506650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:55:32.735867  506650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:55:32.736436  506650 out.go:352] Setting JSON to false
	I1008 18:55:32.737542  506650 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9481,"bootTime":1728404252,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:55:32.737651  506650 start.go:139] virtualization:  
	I1008 18:55:32.740867  506650 out.go:177] * [embed-certs-423092] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:55:32.743304  506650 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:55:32.743438  506650 notify.go:220] Checking for updates...
	I1008 18:55:32.748409  506650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:55:32.751017  506650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:55:32.753272  506650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:55:32.755729  506650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:55:32.758043  506650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:55:32.761088  506650 config.go:182] Loaded profile config "old-k8s-version-265388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1008 18:55:32.761224  506650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:55:32.785980  506650 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:55:32.786114  506650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:55:32.860329  506650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:55:32.83711257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:55:32.860430  506650 docker.go:318] overlay module found
	I1008 18:55:32.863750  506650 out.go:177] * Using the docker driver based on user configuration
	I1008 18:55:32.866432  506650 start.go:297] selected driver: docker
	I1008 18:55:32.866447  506650 start.go:901] validating driver "docker" against <nil>
	I1008 18:55:32.866460  506650 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:55:32.867153  506650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:55:32.943298  506650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:55:32.933560056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:55:32.943514  506650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:55:32.943741  506650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:55:32.946186  506650 out.go:177] * Using Docker driver with root privileges
	I1008 18:55:32.948358  506650 cni.go:84] Creating CNI manager for ""
	I1008 18:55:32.948431  506650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:55:32.948439  506650 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 18:55:32.948530  506650 start.go:340] cluster config:
	{Name:embed-certs-423092 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-423092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:55:32.952711  506650 out.go:177] * Starting "embed-certs-423092" primary control-plane node in "embed-certs-423092" cluster
	I1008 18:55:32.954991  506650 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1008 18:55:32.957171  506650 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1008 18:55:32.959931  506650 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:55:32.959979  506650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1008 18:55:32.959987  506650 cache.go:56] Caching tarball of preloaded images
	I1008 18:55:32.960071  506650 preload.go:172] Found /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 18:55:32.960080  506650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1008 18:55:32.960196  506650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/config.json ...
	I1008 18:55:32.960225  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/config.json: {Name:mked8c4bce3c069aa05ef0d191d61b6555fa39e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
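For context on the profile save above (profile.go:143 / lock.go:35): a minimal Go sketch, not minikube's actual implementation, of writing a cluster config as JSON to the profile's config.json through a temporary file so a concurrent reader never sees a partial write. The ClusterConfig fields here are a hypothetical subset of the values visible in the logged config.

// Hedged sketch: atomic-ish save of a trimmed cluster config to config.json.
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

type ClusterConfig struct {
	Name       string
	EmbedCerts bool
	Memory     int
	CPUs       int
	Driver     string
}

func saveProfile(dir string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem, so readers see old or new, never half-written.
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cfg := ClusterConfig{Name: "embed-certs-423092", EmbedCerts: true, Memory: 2200, CPUs: 2, Driver: "docker"}
	if err := saveProfile(".", cfg); err != nil {
		panic(err)
	}
}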
	I1008 18:55:32.960390  506650 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1008 18:55:32.990281  506650 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1008 18:55:32.990308  506650 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1008 18:55:32.990322  506650 cache.go:194] Successfully downloaded all kic artifacts
	I1008 18:55:32.990345  506650 start.go:360] acquireMachinesLock for embed-certs-423092: {Name:mkfb1cd62c5009168dec0db6759185108cdae850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:55:32.990837  506650 start.go:364] duration metric: took 461.763µs to acquireMachinesLock for "embed-certs-423092"
	I1008 18:55:32.990880  506650 start.go:93] Provisioning new machine with config: &{Name:embed-certs-423092 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-423092 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 18:55:32.990962  506650 start.go:125] createHost starting for "" (driver="docker")
	I1008 18:55:31.850442  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:34.348659  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:32.994917  506650 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1008 18:55:32.995173  506650 start.go:159] libmachine.API.Create for "embed-certs-423092" (driver="docker")
	I1008 18:55:32.995210  506650 client.go:168] LocalClient.Create starting
	I1008 18:55:32.995281  506650 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem
	I1008 18:55:32.995327  506650 main.go:141] libmachine: Decoding PEM data...
	I1008 18:55:32.995346  506650 main.go:141] libmachine: Parsing certificate...
	I1008 18:55:32.995404  506650 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem
	I1008 18:55:32.995427  506650 main.go:141] libmachine: Decoding PEM data...
	I1008 18:55:32.995439  506650 main.go:141] libmachine: Parsing certificate...
	I1008 18:55:32.995816  506650 cli_runner.go:164] Run: docker network inspect embed-certs-423092 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 18:55:33.011835  506650 cli_runner.go:211] docker network inspect embed-certs-423092 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 18:55:33.011924  506650 network_create.go:284] running [docker network inspect embed-certs-423092] to gather additional debugging logs...
	I1008 18:55:33.011947  506650 cli_runner.go:164] Run: docker network inspect embed-certs-423092
	W1008 18:55:33.028717  506650 cli_runner.go:211] docker network inspect embed-certs-423092 returned with exit code 1
	I1008 18:55:33.028751  506650 network_create.go:287] error running [docker network inspect embed-certs-423092]: docker network inspect embed-certs-423092: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-423092 not found
	I1008 18:55:33.028773  506650 network_create.go:289] output of [docker network inspect embed-certs-423092]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-423092 not found
	
	** /stderr **
	I1008 18:55:33.028871  506650 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 18:55:33.048216  506650 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a053b44b9f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:14:5b:20} reservation:<nil>}
	I1008 18:55:33.048685  506650 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f60c83a7a53b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2f:d7:31:1b} reservation:<nil>}
	I1008 18:55:33.049039  506650 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d25d11f2c14 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:bc:ca:34:8e} reservation:<nil>}
	I1008 18:55:33.049430  506650 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d758cb0e1380 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:b0:1f:02:40} reservation:<nil>}
	I1008 18:55:33.050034  506650 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983740}
	I1008 18:55:33.050066  506650 network_create.go:124] attempt to create docker network embed-certs-423092 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 18:55:33.050129  506650 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-423092 embed-certs-423092
	I1008 18:55:33.124834  506650 network_create.go:108] docker network embed-certs-423092 192.168.85.0/24 created
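The network_create step above skips 192.168.49/58/67/76.0/24 because local bridges already own them and settles on 192.168.85.0/24. A simplified Go sketch of that selection (assumed logic, not minikube's network.go): probe each candidate /24 against the host's interface addresses and take the first range no interface already sits in.

// Hedged sketch: pick the first free 192.168.x.0/24, stepping by 9 like the log (49, 58, 67, 76, 85, ...).
package main

import (
	"fmt"
	"net"
)

func subnetInUse(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		// A bridge such as br-83a053b44b9f holding 192.168.49.1/24 marks that range as taken.
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !subnetInUse(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}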
	I1008 18:55:33.124867  506650 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-423092" container
	I1008 18:55:33.124958  506650 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 18:55:33.141368  506650 cli_runner.go:164] Run: docker volume create embed-certs-423092 --label name.minikube.sigs.k8s.io=embed-certs-423092 --label created_by.minikube.sigs.k8s.io=true
	I1008 18:55:33.158353  506650 oci.go:103] Successfully created a docker volume embed-certs-423092
	I1008 18:55:33.158447  506650 cli_runner.go:164] Run: docker run --rm --name embed-certs-423092-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-423092 --entrypoint /usr/bin/test -v embed-certs-423092:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1008 18:55:33.808924  506650 oci.go:107] Successfully prepared a docker volume embed-certs-423092
	I1008 18:55:33.808966  506650 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:55:33.808987  506650 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 18:55:33.809068  506650 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-423092:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 18:55:36.350867  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:38.850850  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:38.634200  506650 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-423092:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.825087291s)
	I1008 18:55:38.634234  506650 kic.go:203] duration metric: took 4.825244816s to extract preloaded images to volume ...
	W1008 18:55:38.634364  506650 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 18:55:38.634474  506650 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 18:55:38.687612  506650 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-423092 --name embed-certs-423092 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-423092 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-423092 --network embed-certs-423092 --ip 192.168.85.2 --volume embed-certs-423092:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1008 18:55:39.028294  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Running}}
	I1008 18:55:39.060205  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:55:39.083706  506650 cli_runner.go:164] Run: docker exec embed-certs-423092 stat /var/lib/dpkg/alternatives/iptables
	I1008 18:55:39.147933  506650 oci.go:144] the created container "embed-certs-423092" has a running status.
	I1008 18:55:39.147963  506650 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa...
	I1008 18:55:39.781095  506650 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 18:55:39.802116  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:55:39.828032  506650 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 18:55:39.828051  506650 kic_runner.go:114] Args: [docker exec --privileged embed-certs-423092 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 18:55:39.900150  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:55:39.921349  506650 machine.go:93] provisionDockerMachine start ...
	I1008 18:55:39.921435  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:39.949846  506650 main.go:141] libmachine: Using SSH client type: native
	I1008 18:55:39.950128  506650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1008 18:55:39.950139  506650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:55:40.109397  506650 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-423092
	
	I1008 18:55:40.109465  506650 ubuntu.go:169] provisioning hostname "embed-certs-423092"
	I1008 18:55:40.109592  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:40.130503  506650 main.go:141] libmachine: Using SSH client type: native
	I1008 18:55:40.130770  506650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1008 18:55:40.130782  506650 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-423092 && echo "embed-certs-423092" | sudo tee /etc/hostname
	I1008 18:55:40.294505  506650 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-423092
	
	I1008 18:55:40.294603  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:40.326020  506650 main.go:141] libmachine: Using SSH client type: native
	I1008 18:55:40.326259  506650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1008 18:55:40.326282  506650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-423092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-423092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-423092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:55:40.483861  506650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:55:40.483892  506650 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19774-283126/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-283126/.minikube}
	I1008 18:55:40.483918  506650 ubuntu.go:177] setting up certificates
	I1008 18:55:40.483929  506650 provision.go:84] configureAuth start
	I1008 18:55:40.483993  506650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-423092
	I1008 18:55:40.508615  506650 provision.go:143] copyHostCerts
	I1008 18:55:40.508689  506650 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem, removing ...
	I1008 18:55:40.508717  506650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem
	I1008 18:55:40.508796  506650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/ca.pem (1078 bytes)
	I1008 18:55:40.508908  506650 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem, removing ...
	I1008 18:55:40.508920  506650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem
	I1008 18:55:40.508949  506650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/cert.pem (1123 bytes)
	I1008 18:55:40.509008  506650 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem, removing ...
	I1008 18:55:40.509018  506650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem
	I1008 18:55:40.509043  506650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-283126/.minikube/key.pem (1679 bytes)
	I1008 18:55:40.509100  506650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem org=jenkins.embed-certs-423092 san=[127.0.0.1 192.168.85.2 embed-certs-423092 localhost minikube]
	I1008 18:55:40.808327  506650 provision.go:177] copyRemoteCerts
	I1008 18:55:40.808398  506650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:55:40.808445  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:40.826394  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:55:40.923110  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 18:55:40.954923  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 18:55:40.980850  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:55:41.006405  506650 provision.go:87] duration metric: took 522.461398ms to configureAuth
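configureAuth above generates a server certificate with SANs [127.0.0.1 192.168.85.2 embed-certs-423092 localhost minikube], signed by the minikube CA, before copying it to /etc/docker on the node. A self-contained Go sketch of issuing such a cert with crypto/x509; it creates its own throwaway CA instead of reusing ca.pem/ca-key.pem, and error handling is elided for brevity, so treat it as an illustration only.

// Hedged sketch: CA plus server cert carrying the SANs reported in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-423092"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"embed-certs-423092", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}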
	I1008 18:55:41.006433  506650 ubuntu.go:193] setting minikube options for container-runtime
	I1008 18:55:41.006627  506650 config.go:182] Loaded profile config "embed-certs-423092": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:55:41.006642  506650 machine.go:96] duration metric: took 1.085274844s to provisionDockerMachine
	I1008 18:55:41.006649  506650 client.go:171] duration metric: took 8.011428859s to LocalClient.Create
	I1008 18:55:41.006669  506650 start.go:167] duration metric: took 8.011497059s to libmachine.API.Create "embed-certs-423092"
	I1008 18:55:41.006682  506650 start.go:293] postStartSetup for "embed-certs-423092" (driver="docker")
	I1008 18:55:41.006692  506650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:55:41.006750  506650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:55:41.006797  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:41.023023  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:55:41.124753  506650 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:55:41.128098  506650 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 18:55:41.128134  506650 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1008 18:55:41.128146  506650 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1008 18:55:41.128153  506650 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1008 18:55:41.128164  506650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/addons for local assets ...
	I1008 18:55:41.128222  506650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-283126/.minikube/files for local assets ...
	I1008 18:55:41.128306  506650 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem -> 2885412.pem in /etc/ssl/certs
	I1008 18:55:41.128415  506650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:55:41.138336  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem --> /etc/ssl/certs/2885412.pem (1708 bytes)
	I1008 18:55:41.164333  506650 start.go:296] duration metric: took 157.637174ms for postStartSetup
	I1008 18:55:41.164732  506650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-423092
	I1008 18:55:41.182280  506650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/config.json ...
	I1008 18:55:41.182582  506650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:55:41.182640  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:41.198723  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:55:41.290589  506650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 18:55:41.295050  506650 start.go:128] duration metric: took 8.304071366s to createHost
	I1008 18:55:41.295075  506650 start.go:83] releasing machines lock for "embed-certs-423092", held for 8.304217035s
	I1008 18:55:41.295157  506650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-423092
	I1008 18:55:41.311946  506650 ssh_runner.go:195] Run: cat /version.json
	I1008 18:55:41.312002  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:41.312254  506650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:55:41.312328  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:55:41.340429  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:55:41.347088  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:55:41.587245  506650 ssh_runner.go:195] Run: systemctl --version
	I1008 18:55:41.591778  506650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 18:55:41.596558  506650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1008 18:55:41.624825  506650 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1008 18:55:41.624971  506650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:55:41.657805  506650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1008 18:55:41.657834  506650 start.go:495] detecting cgroup driver to use...
	I1008 18:55:41.657871  506650 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 18:55:41.657927  506650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 18:55:41.671454  506650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 18:55:41.682917  506650 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:55:41.682987  506650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:55:41.696343  506650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:55:41.711680  506650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:55:41.800827  506650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:55:41.897417  506650 docker.go:233] disabling docker service ...
	I1008 18:55:41.897505  506650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:55:41.919257  506650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:55:41.931604  506650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:55:42.030905  506650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:55:42.128534  506650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:55:42.141786  506650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:55:42.161960  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1008 18:55:42.174859  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 18:55:42.186565  506650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 18:55:42.186644  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 18:55:42.198795  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:55:42.210059  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 18:55:42.220888  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 18:55:42.232022  506650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:55:42.242627  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 18:55:42.257028  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 18:55:42.268042  506650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 18:55:42.282463  506650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:55:42.296549  506650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:55:42.307419  506650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:55:42.402720  506650 ssh_runner.go:195] Run: sudo systemctl restart containerd
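The sed chain above rewrites /etc/containerd/config.toml before the restart, most importantly forcing SystemdCgroup = false to match the "cgroupfs" driver detected on the host. An illustrative Go stand-in for that one edit (a regexp replace over the config text, not minikube's actual code):

// Hedged sketch: the in-memory equivalent of
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}

Writing the result back to the file and running systemctl restart containerd, as the log does next, makes the runtime pick the change up.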
	I1008 18:55:42.550297  506650 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1008 18:55:42.550397  506650 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1008 18:55:42.553981  506650 start.go:563] Will wait 60s for crictl version
	I1008 18:55:42.554066  506650 ssh_runner.go:195] Run: which crictl
	I1008 18:55:42.557958  506650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:55:42.599647  506650 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1008 18:55:42.599753  506650 ssh_runner.go:195] Run: containerd --version
	I1008 18:55:42.622735  506650 ssh_runner.go:195] Run: containerd --version
	I1008 18:55:42.652207  506650 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1008 18:55:42.654883  506650 cli_runner.go:164] Run: docker network inspect embed-certs-423092 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 18:55:42.670444  506650 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 18:55:42.674077  506650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:55:42.684751  506650 kubeadm.go:883] updating cluster {Name:embed-certs-423092 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-423092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:55:42.684875  506650 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:55:42.684945  506650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:55:42.721176  506650 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:55:42.721198  506650 containerd.go:534] Images already preloaded, skipping extraction
	I1008 18:55:42.721264  506650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:55:42.758434  506650 containerd.go:627] all images are preloaded for containerd runtime.
	I1008 18:55:42.758458  506650 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:55:42.758469  506650 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I1008 18:55:42.758569  506650 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-423092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-423092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
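A hedged sketch of materializing the kubelet drop-in shown above as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the path the 322-byte scp later in the log targets) and reloading systemd. Run as root on the node; this is an illustration, not minikube's own writer.

// Hedged sketch: write the logged kubelet unit override and daemon-reload.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-423092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		panic(string(out))
	}
}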
	I1008 18:55:42.758641  506650 ssh_runner.go:195] Run: sudo crictl info
	I1008 18:55:42.795225  506650 cni.go:84] Creating CNI manager for ""
	I1008 18:55:42.795246  506650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:55:42.795255  506650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:55:42.795278  506650 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-423092 NodeName:embed-certs-423092 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:55:42.795404  506650 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-423092"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:55:42.795470  506650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:55:42.804799  506650 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:55:42.804869  506650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:55:42.813664  506650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1008 18:55:42.832173  506650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:55:42.852244  506650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
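The kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new is the YAML printed above with per-node values filled in. A minimal Go sketch of that substitution, using an assumed template that covers only the nodeRegistration stanza:

// Hedged sketch: render the nodeRegistration stanza with this node's name and IP.
package main

import (
	"os"
	"text/template"
)

const stanza = `nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(stanza))
	t.Execute(os.Stdout, struct{ NodeName, NodeIP string }{"embed-certs-423092", "192.168.85.2"})
}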
	I1008 18:55:42.870474  506650 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 18:55:42.874286  506650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:55:42.885413  506650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:55:42.978327  506650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:55:42.995238  506650 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092 for IP: 192.168.85.2
	I1008 18:55:42.995261  506650 certs.go:194] generating shared ca certs ...
	I1008 18:55:42.995276  506650 certs.go:226] acquiring lock for ca certs: {Name:mk9b4a4bb626944e2ef6352dc46232c13e820586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:42.995435  506650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key
	I1008 18:55:42.995482  506650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key
	I1008 18:55:42.995493  506650 certs.go:256] generating profile certs ...
	I1008 18:55:42.995554  506650 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.key
	I1008 18:55:42.995577  506650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.crt with IP's: []
	I1008 18:55:43.308568  506650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.crt ...
	I1008 18:55:43.308600  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.crt: {Name:mkea3fb89f688541307da9ec6607748eba35f292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:43.309359  506650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.key ...
	I1008 18:55:43.309378  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/client.key: {Name:mk2a939341b604e3ded4b81a424dcbcb1e9a9481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:43.310036  506650 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key.aa090873
	I1008 18:55:43.310060  506650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt.aa090873 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 18:55:43.680168  506650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt.aa090873 ...
	I1008 18:55:43.680202  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt.aa090873: {Name:mkd879f01088c2ba7e211ead82c5a3fcdb77e248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:43.681207  506650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key.aa090873 ...
	I1008 18:55:43.681227  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key.aa090873: {Name:mkf240f6c5d12c3606485085a382336cb2be16d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:43.681973  506650 certs.go:381] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt.aa090873 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt
	I1008 18:55:43.682068  506650 certs.go:385] copying /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key.aa090873 -> /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key
	I1008 18:55:43.682130  506650 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.key
	I1008 18:55:43.682159  506650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.crt with IP's: []
	I1008 18:55:44.009314  506650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.crt ...
	I1008 18:55:44.009345  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.crt: {Name:mk0abb4824e3e573e4b740071e964f54c8f7c8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:44.009562  506650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.key ...
	I1008 18:55:44.009579  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.key: {Name:mk4fef265e6117c118a850199832dbcb4a3d6ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:55:44.010261  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541.pem (1338 bytes)
	W1008 18:55:44.010315  506650 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541_empty.pem, impossibly tiny 0 bytes
	I1008 18:55:44.010330  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:55:44.010357  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/ca.pem (1078 bytes)
	I1008 18:55:44.010387  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:55:44.010411  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/certs/key.pem (1679 bytes)
	I1008 18:55:44.010460  506650 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem (1708 bytes)
	I1008 18:55:44.011134  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:55:44.039147  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:55:44.065265  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:55:44.090345  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 18:55:44.117412  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 18:55:44.142022  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:55:44.167141  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:55:44.193433  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/embed-certs-423092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 18:55:44.219377  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/ssl/certs/2885412.pem --> /usr/share/ca-certificates/2885412.pem (1708 bytes)
	I1008 18:55:44.244872  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:55:44.270142  506650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-283126/.minikube/certs/288541.pem --> /usr/share/ca-certificates/288541.pem (1338 bytes)
	I1008 18:55:44.296076  506650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:55:44.315177  506650 ssh_runner.go:195] Run: openssl version
	I1008 18:55:44.321327  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/288541.pem && ln -fs /usr/share/ca-certificates/288541.pem /etc/ssl/certs/288541.pem"
	I1008 18:55:44.331364  506650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/288541.pem
	I1008 18:55:44.334753  506650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 18:10 /usr/share/ca-certificates/288541.pem
	I1008 18:55:44.334823  506650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/288541.pem
	I1008 18:55:44.341867  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/288541.pem /etc/ssl/certs/51391683.0"
	I1008 18:55:44.353400  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2885412.pem && ln -fs /usr/share/ca-certificates/2885412.pem /etc/ssl/certs/2885412.pem"
	I1008 18:55:44.363527  506650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2885412.pem
	I1008 18:55:44.367156  506650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 18:10 /usr/share/ca-certificates/2885412.pem
	I1008 18:55:44.367233  506650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2885412.pem
	I1008 18:55:44.374252  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2885412.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:55:44.383970  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:55:44.393226  506650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:55:44.396846  506650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:55:44.396918  506650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:55:44.407222  506650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
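The openssl/ln pairs above hash each CA bundle and link it into /etc/ssl/certs as <hash>.0 so the system trust store can resolve it by subject hash. A hedged Go sketch of the same dance for minikubeCA.pem; it shells out to openssl and needs root for the symlink, and is illustrative rather than minikube's own code.

// Hedged sketch: compute the OpenSSL subject hash and create /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log line above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", pem, "->", link)
}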
	I1008 18:55:44.417053  506650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:55:44.420537  506650 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 18:55:44.420591  506650 kubeadm.go:392] StartCluster: {Name:embed-certs-423092 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-423092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:55:44.420684  506650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1008 18:55:44.420745  506650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:55:44.462339  506650 cri.go:89] found id: ""
	I1008 18:55:44.462456  506650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 18:55:44.471473  506650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 18:55:44.480945  506650 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 18:55:44.481008  506650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:55:44.489717  506650 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:55:44.489740  506650 kubeadm.go:157] found existing configuration files:
	
	I1008 18:55:44.489824  506650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:55:44.499052  506650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:55:44.499119  506650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:55:44.508217  506650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:55:44.517240  506650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:55:44.517328  506650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:55:44.525857  506650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:55:44.534874  506650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:55:44.534994  506650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:55:44.543723  506650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:55:44.552501  506650 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:55:44.552574  506650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:55:44.561083  506650 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 18:55:44.616943  506650 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 18:55:44.617361  506650 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:55:44.645729  506650 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1008 18:55:44.645977  506650 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1008 18:55:44.646046  506650 kubeadm.go:310] OS: Linux
	I1008 18:55:44.646130  506650 kubeadm.go:310] CGROUPS_CPU: enabled
	I1008 18:55:44.646205  506650 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1008 18:55:44.646290  506650 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1008 18:55:44.646366  506650 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1008 18:55:44.646446  506650 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1008 18:55:44.646529  506650 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1008 18:55:44.646605  506650 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1008 18:55:44.646677  506650 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1008 18:55:44.646757  506650 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1008 18:55:44.711367  506650 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:55:44.711537  506650 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:55:44.711659  506650 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 18:55:44.722849  506650 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:55:41.349508  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:43.351202  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:45.352641  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:44.728450  506650 out.go:235]   - Generating certificates and keys ...
	I1008 18:55:44.728652  506650 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:55:44.728763  506650 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:55:45.294451  506650 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 18:55:45.832872  506650 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 18:55:46.704449  506650 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 18:55:47.652818  506650 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 18:55:47.850485  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:49.851395  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:47.991647  506650 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 18:55:47.992009  506650 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-423092 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 18:55:48.308077  506650 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 18:55:48.308446  506650 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-423092 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 18:55:48.643954  506650 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 18:55:49.028627  506650 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 18:55:49.265440  506650 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 18:55:49.265767  506650 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:55:49.635327  506650 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:55:50.097464  506650 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 18:55:50.716414  506650 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:55:51.085417  506650 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:55:51.785179  506650 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:55:51.786000  506650 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 18:55:51.789110  506650 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 18:55:51.792163  506650 out.go:235]   - Booting up control plane ...
	I1008 18:55:51.792261  506650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 18:55:51.792337  506650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 18:55:51.792403  506650 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 18:55:51.811042  506650 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:55:51.818721  506650 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:55:51.818776  506650 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:55:51.919487  506650 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 18:55:51.919605  506650 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 18:55:52.349040  497176 pod_ready.go:103] pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace has status "Ready":"False"
	I1008 18:55:54.348425  497176 pod_ready.go:82] duration metric: took 4m0.006587025s for pod "metrics-server-9975d5f86-6czd4" in "kube-system" namespace to be "Ready" ...
	E1008 18:55:54.348455  497176 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 18:55:54.348465  497176 pod_ready.go:39] duration metric: took 5m23.635482712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:55:54.348477  497176 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:55:54.348510  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1008 18:55:54.348572  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 18:55:54.426988  497176 cri.go:89] found id: "8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:55:54.427011  497176 cri.go:89] found id: "9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:55:54.427016  497176 cri.go:89] found id: ""
	I1008 18:55:54.427023  497176 logs.go:282] 2 containers: [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2]
	I1008 18:55:54.427095  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.434289  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.438042  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1008 18:55:54.438116  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 18:55:54.534104  497176 cri.go:89] found id: "67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:55:54.534131  497176 cri.go:89] found id: "b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:55:54.534137  497176 cri.go:89] found id: ""
	I1008 18:55:54.534144  497176 logs.go:282] 2 containers: [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3]
	I1008 18:55:54.534203  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.538467  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.542190  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1008 18:55:54.542266  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 18:55:54.608636  497176 cri.go:89] found id: "c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:55:54.608662  497176 cri.go:89] found id: "d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:55:54.608668  497176 cri.go:89] found id: ""
	I1008 18:55:54.608675  497176 logs.go:282] 2 containers: [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e]
	I1008 18:55:54.608733  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.612269  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.617761  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1008 18:55:54.617831  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 18:55:54.682939  497176 cri.go:89] found id: "3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:55:54.682965  497176 cri.go:89] found id: "22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:55:54.682969  497176 cri.go:89] found id: ""
	I1008 18:55:54.682977  497176 logs.go:282] 2 containers: [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e]
	I1008 18:55:54.683030  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.690302  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.694724  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1008 18:55:54.694807  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 18:55:54.764434  497176 cri.go:89] found id: "5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:55:54.764461  497176 cri.go:89] found id: "3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:55:54.764472  497176 cri.go:89] found id: ""
	I1008 18:55:54.764478  497176 logs.go:282] 2 containers: [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52]
	I1008 18:55:54.764549  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.774003  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.784426  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 18:55:54.784510  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 18:55:54.868383  497176 cri.go:89] found id: "d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:55:54.868409  497176 cri.go:89] found id: "08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:55:54.868415  497176 cri.go:89] found id: ""
	I1008 18:55:54.868423  497176 logs.go:282] 2 containers: [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9]
	I1008 18:55:54.868478  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.872165  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.875415  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1008 18:55:54.875492  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 18:55:54.983204  497176 cri.go:89] found id: "afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:55:54.983223  497176 cri.go:89] found id: "feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:55:54.983235  497176 cri.go:89] found id: ""
	I1008 18:55:54.983243  497176 logs.go:282] 2 containers: [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51]
	I1008 18:55:54.983298  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.994273  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:54.998172  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 18:55:54.998243  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 18:55:55.064003  497176 cri.go:89] found id: "4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:55:55.064079  497176 cri.go:89] found id: ""
	I1008 18:55:55.064102  497176 logs.go:282] 1 containers: [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff]
	I1008 18:55:55.064192  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.068413  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1008 18:55:55.068537  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 18:55:55.125442  497176 cri.go:89] found id: "54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:55:55.125527  497176 cri.go:89] found id: "b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:55:55.125547  497176 cri.go:89] found id: ""
	I1008 18:55:55.125571  497176 logs.go:282] 2 containers: [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1]
	I1008 18:55:55.125656  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.129377  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:55:55.132685  497176 logs.go:123] Gathering logs for kubelet ...
	I1008 18:55:55.132749  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 18:55:55.204498  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.728573     661 reflector.go:138] object-"kube-system"/"kindnet-token-5g4mc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5g4mc" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.204731  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729238     661 reflector.go:138] object-"kube-system"/"coredns-token-zcdnl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zcdnl" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.204951  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729407     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-szd4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-szd4x" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205182  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729538     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w5946": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w5946" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205402  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729716     661 reflector.go:138] object-"kube-system"/"metrics-server-token-x2kc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2kc9" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205606  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748500     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.205912  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748741     661 reflector.go:138] object-"default"/"default-token-l5v6w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-l5v6w" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.206141  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.749595     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.215664  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:33 old-k8s-version-265388 kubelet[661]: E1008 18:50:33.549244     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.215870  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:34 old-k8s-version-265388 kubelet[661]: E1008 18:50:34.447550     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.218711  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.276856     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.219167  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.506612     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-pqd5j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-pqd5j" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:55:55.220741  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:58 old-k8s-version-265388 kubelet[661]: E1008 18:50:58.301882     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.221540  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:02 old-k8s-version-265388 kubelet[661]: E1008 18:51:02.574007     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.222010  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:03 old-k8s-version-265388 kubelet[661]: E1008 18:51:03.579128     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.222445  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:04 old-k8s-version-265388 kubelet[661]: E1008 18:51:04.583788     661 pod_workers.go:191] Error syncing pod 26175fac-5bc1-416f-b866-36430292c80d ("storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"
	W1008 18:55:55.222767  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:08 old-k8s-version-265388 kubelet[661]: E1008 18:51:08.571898     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.225526  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:13 old-k8s-version-265388 kubelet[661]: E1008 18:51:13.298924     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.226312  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:21 old-k8s-version-265388 kubelet[661]: E1008 18:51:21.661055     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.226521  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:25 old-k8s-version-265388 kubelet[661]: E1008 18:51:25.268458     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.226866  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:28 old-k8s-version-265388 kubelet[661]: E1008 18:51:28.571883     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.227217  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.268329     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.227431  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.273781     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.228098  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:51 old-k8s-version-265388 kubelet[661]: E1008 18:51:51.739290     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.228305  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:52 old-k8s-version-265388 kubelet[661]: E1008 18:51:52.268075     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.228654  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:58 old-k8s-version-265388 kubelet[661]: E1008 18:51:58.572095     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.231131  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:06 old-k8s-version-265388 kubelet[661]: E1008 18:52:06.276619     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.231493  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:12 old-k8s-version-265388 kubelet[661]: E1008 18:52:12.267768     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.231702  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:21 old-k8s-version-265388 kubelet[661]: E1008 18:52:21.271213     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.232057  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:24 old-k8s-version-265388 kubelet[661]: E1008 18:52:24.268448     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.232267  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:35 old-k8s-version-265388 kubelet[661]: E1008 18:52:35.268779     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.232880  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:37 old-k8s-version-265388 kubelet[661]: E1008 18:52:37.854127     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.233241  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:38 old-k8s-version-265388 kubelet[661]: E1008 18:52:38.857799     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.233483  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:48 old-k8s-version-265388 kubelet[661]: E1008 18:52:48.268171     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.233848  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:50 old-k8s-version-265388 kubelet[661]: E1008 18:52:50.268221     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.234080  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:59 old-k8s-version-265388 kubelet[661]: E1008 18:52:59.268518     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.234427  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:02 old-k8s-version-265388 kubelet[661]: E1008 18:53:02.267840     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.234631  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:12 old-k8s-version-265388 kubelet[661]: E1008 18:53:12.268232     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.241152  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:13 old-k8s-version-265388 kubelet[661]: E1008 18:53:13.267801     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.241395  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:25 old-k8s-version-265388 kubelet[661]: E1008 18:53:25.268175     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.241755  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:27 old-k8s-version-265388 kubelet[661]: E1008 18:53:27.267995     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.244233  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:39 old-k8s-version-265388 kubelet[661]: E1008 18:53:39.279009     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:55:55.244583  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:42 old-k8s-version-265388 kubelet[661]: E1008 18:53:42.267851     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.244792  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:51 old-k8s-version-265388 kubelet[661]: E1008 18:53:51.268831     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.245167  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:56 old-k8s-version-265388 kubelet[661]: E1008 18:53:56.267856     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.245376  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:06 old-k8s-version-265388 kubelet[661]: E1008 18:54:06.268365     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.246041  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:11 old-k8s-version-265388 kubelet[661]: E1008 18:54:11.101872     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.246407  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:18 old-k8s-version-265388 kubelet[661]: E1008 18:54:18.571932     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.246614  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:21 old-k8s-version-265388 kubelet[661]: E1008 18:54:21.268711     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.246962  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:29 old-k8s-version-265388 kubelet[661]: E1008 18:54:29.268265     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.247172  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:36 old-k8s-version-265388 kubelet[661]: E1008 18:54:36.268342     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.247535  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:41 old-k8s-version-265388 kubelet[661]: E1008 18:54:41.268698     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.247748  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:49 old-k8s-version-265388 kubelet[661]: E1008 18:54:49.268273     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.248094  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:53 old-k8s-version-265388 kubelet[661]: E1008 18:54:53.268289     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.248302  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:03 old-k8s-version-265388 kubelet[661]: E1008 18:55:03.269930     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.248655  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: E1008 18:55:06.268627     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.248867  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:14 old-k8s-version-265388 kubelet[661]: E1008 18:55:14.268646     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.249222  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: E1008 18:55:17.269172     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.249597  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.249804  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.249987  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:55.250313  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:55.250634  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:55:55.250643  497176 logs.go:123] Gathering logs for kube-apiserver [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47] ...
	I1008 18:55:55.250658  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:55:55.352772  497176 logs.go:123] Gathering logs for etcd [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec] ...
	I1008 18:55:55.352807  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:55:55.444919  497176 logs.go:123] Gathering logs for etcd [b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3] ...
	I1008 18:55:55.444994  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:55:55.519917  497176 logs.go:123] Gathering logs for kindnet [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15] ...
	I1008 18:55:55.520149  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:55:55.587556  497176 logs.go:123] Gathering logs for container status ...
	I1008 18:55:55.587627  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 18:55:55.665350  497176 logs.go:123] Gathering logs for dmesg ...
	I1008 18:55:55.665378  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 18:55:55.697341  497176 logs.go:123] Gathering logs for kube-apiserver [9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2] ...
	I1008 18:55:55.697378  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:55:55.803526  497176 logs.go:123] Gathering logs for kube-proxy [3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52] ...
	I1008 18:55:55.803559  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:55:55.880986  497176 logs.go:123] Gathering logs for kube-controller-manager [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de] ...
	I1008 18:55:55.881062  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:55:56.010386  497176 logs.go:123] Gathering logs for kindnet [feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51] ...
	I1008 18:55:56.010423  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:55:56.082261  497176 logs.go:123] Gathering logs for containerd ...
	I1008 18:55:56.082295  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1008 18:55:56.159479  497176 logs.go:123] Gathering logs for describe nodes ...
	I1008 18:55:56.159556  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 18:55:52.920087  506650 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000883379s
	I1008 18:55:52.920175  506650 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 18:56:00.921542  506650 kubeadm.go:310] [api-check] The API server is healthy after 8.001425397s
	I1008 18:56:00.952078  506650 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 18:56:00.968085  506650 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 18:56:00.996167  506650 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 18:56:00.996377  506650 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-423092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 18:56:01.009873  506650 kubeadm.go:310] [bootstrap-token] Using token: rp1fev.wtrpoerwtkvg2fer
	I1008 18:55:56.420743  497176 logs.go:123] Gathering logs for kube-scheduler [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650] ...
	I1008 18:55:56.420775  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:55:56.494207  497176 logs.go:123] Gathering logs for kube-scheduler [22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e] ...
	I1008 18:55:56.494235  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:55:56.559318  497176 logs.go:123] Gathering logs for kube-proxy [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29] ...
	I1008 18:55:56.559390  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:55:56.641366  497176 logs.go:123] Gathering logs for kube-controller-manager [08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9] ...
	I1008 18:55:56.641439  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:55:56.740485  497176 logs.go:123] Gathering logs for storage-provisioner [b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1] ...
	I1008 18:55:56.740568  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:55:56.808321  497176 logs.go:123] Gathering logs for coredns [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579] ...
	I1008 18:55:56.808349  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:55:56.861793  497176 logs.go:123] Gathering logs for coredns [d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e] ...
	I1008 18:55:56.861864  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:55:56.926327  497176 logs.go:123] Gathering logs for kubernetes-dashboard [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff] ...
	I1008 18:55:56.926405  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:55:57.010485  497176 logs.go:123] Gathering logs for storage-provisioner [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7] ...
	I1008 18:55:57.010570  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:55:57.080247  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:55:57.080318  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 18:55:57.080398  497176 out.go:270] X Problems detected in kubelet:
	W1008 18:55:57.080444  497176 out.go:270]   Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:57.080479  497176 out.go:270]   Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:57.080535  497176 out.go:270]   Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:55:57.080615  497176 out.go:270]   Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:55:57.080661  497176 out.go:270]   Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:55:57.080697  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:55:57.080759  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:56:01.012576  506650 out.go:235]   - Configuring RBAC rules ...
	I1008 18:56:01.012710  506650 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 18:56:01.017523  506650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 18:56:01.028934  506650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 18:56:01.032939  506650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 18:56:01.037015  506650 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 18:56:01.041025  506650 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 18:56:01.328642  506650 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 18:56:01.763146  506650 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 18:56:02.329447  506650 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 18:56:02.331054  506650 kubeadm.go:310] 
	I1008 18:56:02.331128  506650 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 18:56:02.331142  506650 kubeadm.go:310] 
	I1008 18:56:02.331220  506650 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 18:56:02.331229  506650 kubeadm.go:310] 
	I1008 18:56:02.331255  506650 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 18:56:02.331317  506650 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 18:56:02.331371  506650 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 18:56:02.331379  506650 kubeadm.go:310] 
	I1008 18:56:02.331433  506650 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 18:56:02.331441  506650 kubeadm.go:310] 
	I1008 18:56:02.331488  506650 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 18:56:02.331497  506650 kubeadm.go:310] 
	I1008 18:56:02.331565  506650 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 18:56:02.331643  506650 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 18:56:02.331715  506650 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 18:56:02.331727  506650 kubeadm.go:310] 
	I1008 18:56:02.331814  506650 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 18:56:02.331894  506650 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 18:56:02.331904  506650 kubeadm.go:310] 
	I1008 18:56:02.331991  506650 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rp1fev.wtrpoerwtkvg2fer \
	I1008 18:56:02.332097  506650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329 \
	I1008 18:56:02.332121  506650 kubeadm.go:310] 	--control-plane 
	I1008 18:56:02.332130  506650 kubeadm.go:310] 
	I1008 18:56:02.332214  506650 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 18:56:02.332222  506650 kubeadm.go:310] 
	I1008 18:56:02.332306  506650 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rp1fev.wtrpoerwtkvg2fer \
	I1008 18:56:02.332422  506650 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b97bb3e8e417b962d820ebc093937d5128e022499abe774f12128a2d4bef5329 
	I1008 18:56:02.336989  506650 kubeadm.go:310] W1008 18:55:44.613234    1042 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 18:56:02.337294  506650 kubeadm.go:310] W1008 18:55:44.614449    1042 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 18:56:02.337517  506650 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1008 18:56:02.337621  506650 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:56:02.337637  506650 cni.go:84] Creating CNI manager for ""
	I1008 18:56:02.337645  506650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:56:02.340634  506650 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 18:56:02.343324  506650 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 18:56:02.347466  506650 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 18:56:02.347487  506650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 18:56:02.369567  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 18:56:02.680490  506650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 18:56:02.680623  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:02.680699  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-423092 minikube.k8s.io/updated_at=2024_10_08T18_56_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=embed-certs-423092 minikube.k8s.io/primary=true
	I1008 18:56:02.894943  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:02.895077  506650 ops.go:34] apiserver oom_adj: -16
	I1008 18:56:03.395072  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:03.895578  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:04.395610  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:04.895297  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:05.395316  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:05.895526  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:06.395776  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:06.895945  506650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 18:56:07.031996  506650 kubeadm.go:1113] duration metric: took 4.351418188s to wait for elevateKubeSystemPrivileges
	I1008 18:56:07.032025  506650 kubeadm.go:394] duration metric: took 22.611439703s to StartCluster
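	For context, the repeated "kubectl get sa default" runs above (roughly every 500ms from 18:56:02 to 18:56:07) are the elevateKubeSystemPrivileges wait: minikube polls until the default ServiceAccount exists before continuing cluster setup. The following is an illustrative sketch of an equivalent poll loop, not part of the log; the binary and kubeconfig paths are copied from the commands above, and the 0.5s interval is inferred from the timestamps.

	  # Poll until the default ServiceAccount is visible via the bundled kubectl.
	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry cadence matching the ~500ms gaps in the log above
	  done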
	I1008 18:56:07.032043  506650 settings.go:142] acquiring lock: {Name:mk88999f347ab2e93b53f54a6e8df12c27df7c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:56:07.032108  506650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:56:07.034762  506650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/kubeconfig: {Name:mkc40596aa3771ba8a6c8897a16b531991d7bc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:56:07.036143  506650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1008 18:56:07.036682  506650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 18:56:07.036874  506650 config.go:182] Loaded profile config "embed-certs-423092": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:56:07.036906  506650 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 18:56:07.037067  506650 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-423092"
	I1008 18:56:07.037089  506650 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-423092"
	I1008 18:56:07.037115  506650 host.go:66] Checking if "embed-certs-423092" exists ...
	I1008 18:56:07.037863  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:56:07.040238  506650 addons.go:69] Setting default-storageclass=true in profile "embed-certs-423092"
	I1008 18:56:07.040280  506650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-423092"
	I1008 18:56:07.040758  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:56:07.043164  506650 out.go:177] * Verifying Kubernetes components...
	I1008 18:56:07.052498  506650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:56:07.075613  506650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:56:07.079143  506650 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:56:07.079169  506650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 18:56:07.079233  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:56:07.089153  506650 addons.go:234] Setting addon default-storageclass=true in "embed-certs-423092"
	I1008 18:56:07.089198  506650 host.go:66] Checking if "embed-certs-423092" exists ...
	I1008 18:56:07.089654  506650 cli_runner.go:164] Run: docker container inspect embed-certs-423092 --format={{.State.Status}}
	I1008 18:56:07.124629  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:56:07.133901  506650 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 18:56:07.133922  506650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 18:56:07.133985  506650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-423092
	I1008 18:56:07.162623  506650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/embed-certs-423092/id_rsa Username:docker}
	I1008 18:56:07.454369  506650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 18:56:07.454487  506650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:56:07.483819  506650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 18:56:07.558372  506650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 18:56:08.670485  506650 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.215969978s)
	I1008 18:56:08.671737  506650 node_ready.go:35] waiting up to 6m0s for node "embed-certs-423092" to be "Ready" ...
	I1008 18:56:08.672034  506650 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.217635849s)
	I1008 18:56:08.672058  506650 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
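	For reference, the sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a hosts stanza (plus a "log" directive ahead of "errors"). Reconstructed from the sed expression in the command itself, the injected stanza is:

	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }

	This makes host.minikube.internal resolve to 192.168.85.1 from inside the cluster, which is what the "host record injected into CoreDNS's ConfigMap" message above reports.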
	I1008 18:56:08.673403  506650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.189556035s)
	I1008 18:56:08.734796  506650 node_ready.go:49] node "embed-certs-423092" has status "Ready":"True"
	I1008 18:56:08.734824  506650 node_ready.go:38] duration metric: took 63.062939ms for node "embed-certs-423092" to be "Ready" ...
	I1008 18:56:08.734834  506650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:56:08.772508  506650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4vbhx" in "kube-system" namespace to be "Ready" ...
	I1008 18:56:09.110046  506650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.55159529s)
	I1008 18:56:09.112957  506650 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1008 18:56:07.086382  497176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:56:07.115541  497176 api_server.go:72] duration metric: took 5m53.774286827s to wait for apiserver process to appear ...
	I1008 18:56:07.115566  497176 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:56:07.115616  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1008 18:56:07.115669  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 18:56:07.197609  497176 cri.go:89] found id: "8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:56:07.197630  497176 cri.go:89] found id: "9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:56:07.197636  497176 cri.go:89] found id: ""
	I1008 18:56:07.197643  497176 logs.go:282] 2 containers: [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2]
	I1008 18:56:07.197819  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.202055  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.210623  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1008 18:56:07.210693  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 18:56:07.290879  497176 cri.go:89] found id: "67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:56:07.290905  497176 cri.go:89] found id: "b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:56:07.290910  497176 cri.go:89] found id: ""
	I1008 18:56:07.290917  497176 logs.go:282] 2 containers: [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3]
	I1008 18:56:07.290971  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.298486  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.305409  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1008 18:56:07.305487  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 18:56:07.379981  497176 cri.go:89] found id: "c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:56:07.380001  497176 cri.go:89] found id: "d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:56:07.380005  497176 cri.go:89] found id: ""
	I1008 18:56:07.380013  497176 logs.go:282] 2 containers: [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e]
	I1008 18:56:07.380074  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.384702  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.388889  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1008 18:56:07.388954  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 18:56:07.498582  497176 cri.go:89] found id: "3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:56:07.498602  497176 cri.go:89] found id: "22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:56:07.498606  497176 cri.go:89] found id: ""
	I1008 18:56:07.498614  497176 logs.go:282] 2 containers: [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e]
	I1008 18:56:07.498668  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.504769  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.509607  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1008 18:56:07.509775  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 18:56:07.581606  497176 cri.go:89] found id: "5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:56:07.581735  497176 cri.go:89] found id: "3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:56:07.581781  497176 cri.go:89] found id: ""
	I1008 18:56:07.581805  497176 logs.go:282] 2 containers: [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52]
	I1008 18:56:07.581895  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.586372  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.590691  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 18:56:07.590838  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 18:56:07.666361  497176 cri.go:89] found id: "d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:56:07.666435  497176 cri.go:89] found id: "08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:56:07.666454  497176 cri.go:89] found id: ""
	I1008 18:56:07.666479  497176 logs.go:282] 2 containers: [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9]
	I1008 18:56:07.666568  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.673975  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.679696  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1008 18:56:07.679825  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 18:56:07.751196  497176 cri.go:89] found id: "afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:56:07.751257  497176 cri.go:89] found id: "feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:56:07.751285  497176 cri.go:89] found id: ""
	I1008 18:56:07.751306  497176 logs.go:282] 2 containers: [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51]
	I1008 18:56:07.751392  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.757010  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.762153  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 18:56:07.762270  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 18:56:07.859980  497176 cri.go:89] found id: "4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:56:07.860049  497176 cri.go:89] found id: ""
	I1008 18:56:07.860073  497176 logs.go:282] 1 containers: [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff]
	I1008 18:56:07.860157  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.865322  497176 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1008 18:56:07.865453  497176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 18:56:07.974577  497176 cri.go:89] found id: "54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:56:07.974649  497176 cri.go:89] found id: "b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:56:07.974677  497176 cri.go:89] found id: ""
	I1008 18:56:07.974700  497176 logs.go:282] 2 containers: [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1]
	I1008 18:56:07.974785  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.982927  497176 ssh_runner.go:195] Run: which crictl
	I1008 18:56:07.988764  497176 logs.go:123] Gathering logs for describe nodes ...
	I1008 18:56:07.988838  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 18:56:08.241929  497176 logs.go:123] Gathering logs for kindnet [feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51] ...
	I1008 18:56:08.241964  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51"
	I1008 18:56:08.305129  497176 logs.go:123] Gathering logs for kube-controller-manager [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de] ...
	I1008 18:56:08.305159  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de"
	I1008 18:56:08.435945  497176 logs.go:123] Gathering logs for kindnet [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15] ...
	I1008 18:56:08.436018  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15"
	I1008 18:56:08.516643  497176 logs.go:123] Gathering logs for kubelet ...
	I1008 18:56:08.516717  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 18:56:08.588108  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.728573     661 reflector.go:138] object-"kube-system"/"kindnet-token-5g4mc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5g4mc" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.588437  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729238     661 reflector.go:138] object-"kube-system"/"coredns-token-zcdnl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zcdnl" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590014  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729407     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-szd4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-szd4x" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590300  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729538     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w5946": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w5946" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590624  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.729716     661 reflector.go:138] object-"kube-system"/"metrics-server-token-x2kc9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-x2kc9" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.590888  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748500     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.591137  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.748741     661 reflector.go:138] object-"default"/"default-token-l5v6w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-l5v6w" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.591381  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:30 old-k8s-version-265388 kubelet[661]: E1008 18:50:30.749595     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.607949  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:33 old-k8s-version-265388 kubelet[661]: E1008 18:50:33.549244     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.608472  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:34 old-k8s-version-265388 kubelet[661]: E1008 18:50:34.447550     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.613378  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.276856     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.614298  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:47 old-k8s-version-265388 kubelet[661]: E1008 18:50:47.506612     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-pqd5j": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-pqd5j" is forbidden: User "system:node:old-k8s-version-265388" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-265388' and this object
	W1008 18:56:08.616109  497176 logs.go:138] Found kubelet problem: Oct 08 18:50:58 old-k8s-version-265388 kubelet[661]: E1008 18:50:58.301882     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.617085  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:02 old-k8s-version-265388 kubelet[661]: E1008 18:51:02.574007     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.617689  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:03 old-k8s-version-265388 kubelet[661]: E1008 18:51:03.579128     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.622315  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:04 old-k8s-version-265388 kubelet[661]: E1008 18:51:04.583788     661 pod_workers.go:191] Error syncing pod 26175fac-5bc1-416f-b866-36430292c80d ("storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(26175fac-5bc1-416f-b866-36430292c80d)"
	W1008 18:56:08.622694  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:08 old-k8s-version-265388 kubelet[661]: E1008 18:51:08.571898     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.625651  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:13 old-k8s-version-265388 kubelet[661]: E1008 18:51:13.298924     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.627906  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:21 old-k8s-version-265388 kubelet[661]: E1008 18:51:21.661055     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.628148  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:25 old-k8s-version-265388 kubelet[661]: E1008 18:51:25.268458     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.628510  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:28 old-k8s-version-265388 kubelet[661]: E1008 18:51:28.571883     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.628863  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.268329     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.629091  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:39 old-k8s-version-265388 kubelet[661]: E1008 18:51:39.273781     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.629734  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:51 old-k8s-version-265388 kubelet[661]: E1008 18:51:51.739290     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.629948  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:52 old-k8s-version-265388 kubelet[661]: E1008 18:51:52.268075     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.630313  497176 logs.go:138] Found kubelet problem: Oct 08 18:51:58 old-k8s-version-265388 kubelet[661]: E1008 18:51:58.572095     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.635293  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:06 old-k8s-version-265388 kubelet[661]: E1008 18:52:06.276619     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.635680  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:12 old-k8s-version-265388 kubelet[661]: E1008 18:52:12.267768     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.635901  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:21 old-k8s-version-265388 kubelet[661]: E1008 18:52:21.271213     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.636256  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:24 old-k8s-version-265388 kubelet[661]: E1008 18:52:24.268448     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.636465  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:35 old-k8s-version-265388 kubelet[661]: E1008 18:52:35.268779     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.638588  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:37 old-k8s-version-265388 kubelet[661]: E1008 18:52:37.854127     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.638982  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:38 old-k8s-version-265388 kubelet[661]: E1008 18:52:38.857799     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.639203  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:48 old-k8s-version-265388 kubelet[661]: E1008 18:52:48.268171     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.639557  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:50 old-k8s-version-265388 kubelet[661]: E1008 18:52:50.268221     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.639771  497176 logs.go:138] Found kubelet problem: Oct 08 18:52:59 old-k8s-version-265388 kubelet[661]: E1008 18:52:59.268518     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.640652  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:02 old-k8s-version-265388 kubelet[661]: E1008 18:53:02.267840     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.640888  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:12 old-k8s-version-265388 kubelet[661]: E1008 18:53:12.268232     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.641250  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:13 old-k8s-version-265388 kubelet[661]: E1008 18:53:13.267801     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.641639  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:25 old-k8s-version-265388 kubelet[661]: E1008 18:53:25.268175     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.642022  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:27 old-k8s-version-265388 kubelet[661]: E1008 18:53:27.267995     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.646298  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:39 old-k8s-version-265388 kubelet[661]: E1008 18:53:39.279009     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1008 18:56:08.646700  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:42 old-k8s-version-265388 kubelet[661]: E1008 18:53:42.267851     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.646917  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:51 old-k8s-version-265388 kubelet[661]: E1008 18:53:51.268831     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.647280  497176 logs.go:138] Found kubelet problem: Oct 08 18:53:56 old-k8s-version-265388 kubelet[661]: E1008 18:53:56.267856     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.647491  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:06 old-k8s-version-265388 kubelet[661]: E1008 18:54:06.268365     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.648103  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:11 old-k8s-version-265388 kubelet[661]: E1008 18:54:11.101872     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.650134  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:18 old-k8s-version-265388 kubelet[661]: E1008 18:54:18.571932     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.650407  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:21 old-k8s-version-265388 kubelet[661]: E1008 18:54:21.268711     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.650825  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:29 old-k8s-version-265388 kubelet[661]: E1008 18:54:29.268265     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.651043  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:36 old-k8s-version-265388 kubelet[661]: E1008 18:54:36.268342     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.651395  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:41 old-k8s-version-265388 kubelet[661]: E1008 18:54:41.268698     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.651627  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:49 old-k8s-version-265388 kubelet[661]: E1008 18:54:49.268273     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.651995  497176 logs.go:138] Found kubelet problem: Oct 08 18:54:53 old-k8s-version-265388 kubelet[661]: E1008 18:54:53.268289     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.652204  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:03 old-k8s-version-265388 kubelet[661]: E1008 18:55:03.269930     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.652566  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: E1008 18:55:06.268627     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.652776  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:14 old-k8s-version-265388 kubelet[661]: E1008 18:55:14.268646     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.653623  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: E1008 18:55:17.269172     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.654013  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.654234  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.654447  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.655999  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.656374  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:08.656588  497176 logs.go:138] Found kubelet problem: Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:08.657958  497176 logs.go:138] Found kubelet problem: Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:56:08.658025  497176 logs.go:123] Gathering logs for kube-apiserver [9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2] ...
	I1008 18:56:08.658056  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2"
	I1008 18:56:08.778612  497176 logs.go:123] Gathering logs for etcd [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec] ...
	I1008 18:56:08.778688  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec"
	I1008 18:56:08.869255  497176 logs.go:123] Gathering logs for coredns [d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e] ...
	I1008 18:56:08.869287  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e"
	I1008 18:56:08.987238  497176 logs.go:123] Gathering logs for kube-scheduler [22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e] ...
	I1008 18:56:08.987263  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e"
	I1008 18:56:09.054555  497176 logs.go:123] Gathering logs for kube-proxy [3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52] ...
	I1008 18:56:09.054631  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52"
	I1008 18:56:09.116876  497176 logs.go:123] Gathering logs for storage-provisioner [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7] ...
	I1008 18:56:09.116937  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7"
	I1008 18:56:09.169481  497176 logs.go:123] Gathering logs for containerd ...
	I1008 18:56:09.169562  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1008 18:56:09.232758  497176 logs.go:123] Gathering logs for coredns [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579] ...
	I1008 18:56:09.232801  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579"
	I1008 18:56:09.294358  497176 logs.go:123] Gathering logs for kube-scheduler [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650] ...
	I1008 18:56:09.294395  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650"
	I1008 18:56:09.335736  497176 logs.go:123] Gathering logs for storage-provisioner [b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1] ...
	I1008 18:56:09.335764  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1"
	I1008 18:56:09.375000  497176 logs.go:123] Gathering logs for container status ...
	I1008 18:56:09.375027  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 18:56:09.452431  497176 logs.go:123] Gathering logs for dmesg ...
	I1008 18:56:09.452575  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 18:56:09.487791  497176 logs.go:123] Gathering logs for kube-apiserver [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47] ...
	I1008 18:56:09.487825  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47"
	I1008 18:56:09.577527  497176 logs.go:123] Gathering logs for etcd [b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3] ...
	I1008 18:56:09.577562  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3"
	I1008 18:56:09.653886  497176 logs.go:123] Gathering logs for kube-proxy [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29] ...
	I1008 18:56:09.654037  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29"
	I1008 18:56:09.714977  497176 logs.go:123] Gathering logs for kube-controller-manager [08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9] ...
	I1008 18:56:09.715057  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9"
	I1008 18:56:09.799165  497176 logs.go:123] Gathering logs for kubernetes-dashboard [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff] ...
	I1008 18:56:09.799242  497176 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff"
	I1008 18:56:09.847126  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:56:09.847153  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 18:56:09.847203  497176 out.go:270] X Problems detected in kubelet:
	W1008 18:56:09.847219  497176 out.go:270]   Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:09.847227  497176 out.go:270]   Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:09.847235  497176 out.go:270]   Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	W1008 18:56:09.847249  497176 out.go:270]   Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1008 18:56:09.847255  497176 out.go:270]   Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	I1008 18:56:09.847261  497176 out.go:358] Setting ErrFile to fd 2...
	I1008 18:56:09.847268  497176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:56:09.115834  506650 addons.go:510] duration metric: took 2.078914131s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1008 18:56:09.178514  506650 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-423092" context rescaled to 1 replicas
	I1008 18:56:10.778638  506650 pod_ready.go:103] pod "coredns-7c65d6cfc9-4vbhx" in "kube-system" namespace has status "Ready":"False"
	I1008 18:56:13.279259  506650 pod_ready.go:103] pod "coredns-7c65d6cfc9-4vbhx" in "kube-system" namespace has status "Ready":"False"
	I1008 18:56:15.279374  506650 pod_ready.go:103] pod "coredns-7c65d6cfc9-4vbhx" in "kube-system" namespace has status "Ready":"False"
	I1008 18:56:17.280004  506650 pod_ready.go:103] pod "coredns-7c65d6cfc9-4vbhx" in "kube-system" namespace has status "Ready":"False"
	I1008 18:56:19.849038  497176 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 18:56:19.860801  497176 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 18:56:19.863849  497176 out.go:201] 
	W1008 18:56:19.866435  497176 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1008 18:56:19.866472  497176 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1008 18:56:19.866489  497176 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1008 18:56:19.866496  497176 out.go:270] * 
	W1008 18:56:19.867660  497176 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 18:56:19.870754  497176 out.go:201] 
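The exit above is minikube's K8S_UNHEALTHY_CONTROL_PLANE reason: the apiserver answers /healthz with 200, but the wait for the control plane to report v1.20.0 never completes within the 6m0s budget. A quick manual cross-check after such a failure, assuming the kubeconfig context created for this profile is still present, is to compare the node's reported versions and hit the same health endpoint the log polls (anonymous GETs of /healthz are allowed by the Kubernetes defaults):

  kubectl --context old-k8s-version-265388 get nodes -o wide
  curl -k https://192.168.76.2:8443/healthz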
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0ada608f921a2       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   1924c1b465c36       dashboard-metrics-scraper-8d5bb5db8-7rgqg
	54bd3f1bc6af7       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   e780ea8ce2c1a       storage-provisioner
	4730beee8eb17       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   b53ee47e566d8       kubernetes-dashboard-cd95d586-w44t2
	c84a70362114b       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   81a67ca68cf17       coredns-74ff55c5b-qc6g5
	5bc796ead04f0       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   267ca07614c4c       kube-proxy-jtkrl
	183d7c259df29       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   956eb99a1e360       busybox
	afe755f054498       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   85e03ad74697a       kindnet-lmt68
	b59eb35fd652d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   e780ea8ce2c1a       storage-provisioner
	3eaac32bdc9eb       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   d193de4baafee       kube-scheduler-old-k8s-version-265388
	d454ac385124a       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   4cd41a697fe69       kube-controller-manager-old-k8s-version-265388
	8b125c50b8db0       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   dbc83b139bebd       kube-apiserver-old-k8s-version-265388
	67a2f779038d1       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   257a23b8f52b4       etcd-old-k8s-version-265388
	f3b2bfaa5db59       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   90fe3d469a7ea       busybox
	d7ead14d57196       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   3d5d943197e3e       coredns-74ff55c5b-qc6g5
	feaff4cac7e18       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   d5dfcac8217b8       kindnet-lmt68
	3c5e8fd7a714c       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   3efa02eb5884d       kube-proxy-jtkrl
	08def40430066       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   ed779aeb20097       kube-controller-manager-old-k8s-version-265388
	22d0db33e2c93       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   e06a3e1ecd879       kube-scheduler-old-k8s-version-265388
	9ec59361f1d64       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   8737f1f30d300       kube-apiserver-old-k8s-version-265388
	b54c80399aaf4       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   70947780ed19c       etcd-old-k8s-version-265388
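The table above comes from the "container status" gathering step (sudo crictl ps -a) earlier in the log. dashboard-metrics-scraper has exited after its fifth attempt and its own output is not captured in this report; a minimal way to pull it directly, assuming the profile is still running and using the container ID prefix from the first column, would be:

  minikube -p old-k8s-version-265388 ssh -- sudo crictl ps -a
  minikube -p old-k8s-version-265388 ssh -- sudo crictl logs 0ada608f921a2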
	
	
	==> containerd <==
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.293834131Z" level=info msg="CreateContainer within sandbox \"1924c1b465c360098fbf3b93a560f11aaa4dedacf6b2085f0458e065e340a3b3\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0\""
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.294422896Z" level=info msg="StartContainer for \"3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0\""
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.374882205Z" level=info msg="StartContainer for \"3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0\" returns successfully"
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.408096892Z" level=info msg="shim disconnected" id=3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0 namespace=k8s.io
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.408154015Z" level=warning msg="cleaning up after shim disconnected" id=3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0 namespace=k8s.io
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.408168209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.855734089Z" level=info msg="RemoveContainer for \"0246ee19f1d3e9d037366a02bad75b2a3de926ec82f15f2250e269ef0c1d4073\""
	Oct 08 18:52:37 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:52:37.861497500Z" level=info msg="RemoveContainer for \"0246ee19f1d3e9d037366a02bad75b2a3de926ec82f15f2250e269ef0c1d4073\" returns successfully"
	Oct 08 18:53:39 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:53:39.268785780Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:53:39 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:53:39.275246739Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 08 18:53:39 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:53:39.277171998Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 08 18:53:39 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:53:39.277252693Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.270050519Z" level=info msg="CreateContainer within sandbox \"1924c1b465c360098fbf3b93a560f11aaa4dedacf6b2085f0458e065e340a3b3\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.289667630Z" level=info msg="CreateContainer within sandbox \"1924c1b465c360098fbf3b93a560f11aaa4dedacf6b2085f0458e065e340a3b3\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c\""
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.290401574Z" level=info msg="StartContainer for \"0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c\""
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.367204208Z" level=info msg="StartContainer for \"0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c\" returns successfully"
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.391536378Z" level=info msg="shim disconnected" id=0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c namespace=k8s.io
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.391600869Z" level=warning msg="cleaning up after shim disconnected" id=0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c namespace=k8s.io
	Oct 08 18:54:10 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:10.391614760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 08 18:54:11 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:11.107612552Z" level=info msg="RemoveContainer for \"3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0\""
	Oct 08 18:54:11 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:54:11.113865083Z" level=info msg="RemoveContainer for \"3066e546308b8fa8973acb9aa41c5a3ccb7bd91f61ea6753cf6ca0aa57cd27e0\" returns successfully"
	Oct 08 18:56:20 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:56:20.268797040Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:56:20 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:56:20.302328575Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 08 18:56:20 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:56:20.304317480Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 08 18:56:20 old-k8s-version-265388 containerd[570]: time="2024-10-08T18:56:20.304416833Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
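The PullImage failures above are the containerd side of the kubelet ImagePullBackOff entries: the metrics-server pod references fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain does not resolve, so every pull attempt fails at DNS lookup. A minimal way to confirm the image reference from the same cluster, assuming the owning deployment is named metrics-server as the pod name metrics-server-9975d5f86-6czd4 suggests:

  kubectl --context old-k8s-version-265388 -n kube-system get deployment metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'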
	
	
	==> coredns [c84a70362114befb3c291913d2dccae01be8cebac20b41f21c12a7b8b2a49579] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40764 - 29487 "HINFO IN 1694422570366884499.7646149154239904582. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022614493s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1008 18:51:04.795770       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-08 18:50:34.795380075 +0000 UTC m=+0.023848842) (total time: 30.000278181s):
	Trace[2019727887]: [30.000278181s] [30.000278181s] END
	E1008 18:51:04.795800       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1008 18:51:04.795843       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-08 18:50:34.795198927 +0000 UTC m=+0.023667695) (total time: 30.000481587s):
	Trace[1427131847]: [30.000481587s] [30.000481587s] END
	E1008 18:51:04.795854       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1008 18:51:04.795888       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-08 18:50:34.795237334 +0000 UTC m=+0.023706111) (total time: 30.000640851s):
	Trace[911902081]: [30.000640851s] [30.000640851s] END
	E1008 18:51:04.795894       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
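The traces above show this coredns replica spent its first ~30s after restart unable to reach the in-cluster API service at 10.96.0.1:443 (i/o timeouts on its Service, Namespace, and Endpoints list calls). To inspect the service and the endpoints behind that ClusterIP from the same context:

  kubectl --context old-k8s-version-265388 get service kubernetes
  kubectl --context old-k8s-version-265388 get endpoints kubernetes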
	
	
	==> coredns [d7ead14d571966e1422ba5cf786ff4cd0064d33c0afc4fece9f3bfa2b07eb28e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57334 - 57798 "HINFO IN 491631864710060197.7390494370173724582. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033311113s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-265388
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-265388
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=old-k8s-version-265388
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_47_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:47:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-265388
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:56:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:51:21 +0000   Tue, 08 Oct 2024 18:47:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:51:21 +0000   Tue, 08 Oct 2024 18:47:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:51:21 +0000   Tue, 08 Oct 2024 18:47:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:51:21 +0000   Tue, 08 Oct 2024 18:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-265388
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e31e19ec31254cf59827e0b3d79d9465
	  System UUID:                6cd0fffb-ffff-457f-a069-f7de62b1b503
	  Boot ID:                    b951cf46-640a-45c2-9395-0fcf341c803c
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-qc6g5                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m39s
	  kube-system                 etcd-old-k8s-version-265388                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m46s
	  kube-system                 kindnet-lmt68                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m39s
	  kube-system                 kube-apiserver-old-k8s-version-265388             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-old-k8s-version-265388    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-jtkrl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-scheduler-old-k8s-version-265388             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 metrics-server-9975d5f86-6czd4                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-7rgqg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-w44t2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  9m5s (x5 over 9m5s)  kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m5s (x4 over 9m5s)  kubelet     Node old-k8s-version-265388 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m5s (x4 over 9m5s)  kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m46s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m46s                kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s                kubelet     Node old-k8s-version-265388 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m46s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m39s                kubelet     Node old-k8s-version-265388 status is now: NodeReady
	  Normal  Starting                 8m36s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)      kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)      kubelet     Node old-k8s-version-265388 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)      kubelet     Node old-k8s-version-265388 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Oct 8 17:31] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [67a2f779038d1fc2e1c2a1a0dd241b17fbbf68d3c1ff9aabf2f025cb27cde4ec] <==
	2024-10-08 18:52:21.514748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:52:31.514522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:52:41.514623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:52:51.516176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:01.517746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:11.514528 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:21.514711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:31.514582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:41.514690 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:53:51.514584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:01.515093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:11.514580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:21.514593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:31.514452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:41.514780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:54:51.514557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:01.514593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:11.516000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:21.514694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:31.514601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:41.514780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:55:51.515082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:56:01.514703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:56:11.514550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:56:21.514956 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [b54c80399aaf4564bd4b06904a3b370ccb549197289645dffee2bb3d2ff0ffc3] <==
	raft2024/10/08 18:47:17 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/08 18:47:17 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/08 18:47:17 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-08 18:47:17.810460 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-08 18:47:17.813896 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-08 18:47:17.814081 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-08 18:47:17.814210 I | etcdserver: published {Name:old-k8s-version-265388 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-08 18:47:17.814311 I | embed: ready to serve client requests
	2024-10-08 18:47:17.815697 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-08 18:47:17.816005 I | embed: ready to serve client requests
	2024-10-08 18:47:17.820399 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-08 18:47:44.201255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:47:45.547408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:47:55.547611 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:05.547637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:15.547558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:25.547567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:35.547667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:45.547634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:48:55.547705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:49:05.547608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:49:15.547850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:49:25.547649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:49:35.547918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-08 18:49:45.547685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:56:21 up  2:38,  0 users,  load average: 1.81, 1.68, 2.13
	Linux old-k8s-version-265388 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [afe755f054498e79e1505269d3e51e49df21447fc15ed828fde47cfb3eb20b15] <==
	I1008 18:54:14.412077       1 main.go:299] handling current node
	I1008 18:54:24.417348       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:54:24.417841       1 main.go:299] handling current node
	I1008 18:54:34.411085       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:54:34.411124       1 main.go:299] handling current node
	I1008 18:54:44.418025       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:54:44.418117       1 main.go:299] handling current node
	I1008 18:54:54.413384       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:54:54.413431       1 main.go:299] handling current node
	I1008 18:55:04.418318       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:04.418356       1 main.go:299] handling current node
	I1008 18:55:14.417152       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:14.417191       1 main.go:299] handling current node
	I1008 18:55:24.419367       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:24.419403       1 main.go:299] handling current node
	I1008 18:55:34.410469       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:34.410503       1 main.go:299] handling current node
	I1008 18:55:44.413751       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:44.413792       1 main.go:299] handling current node
	I1008 18:55:54.417741       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:55:54.417773       1 main.go:299] handling current node
	I1008 18:56:04.418540       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:56:04.418577       1 main.go:299] handling current node
	I1008 18:56:14.414369       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:56:14.414414       1 main.go:299] handling current node
	
	
	==> kindnet [feaff4cac7e18e4dffb28273b0f94a20325bfe44cc4e5de2d25860bfe7fccc51] <==
	I1008 18:47:46.295185       1 controller.go:374] Syncing nftables rules
	I1008 18:47:56.103333       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:47:56.103387       1 main.go:299] handling current node
	I1008 18:48:06.094295       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:06.094330       1 main.go:299] handling current node
	I1008 18:48:16.094307       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:16.094363       1 main.go:299] handling current node
	I1008 18:48:26.095108       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:26.095202       1 main.go:299] handling current node
	I1008 18:48:36.103379       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:36.103415       1 main.go:299] handling current node
	I1008 18:48:46.094277       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:46.094309       1 main.go:299] handling current node
	I1008 18:48:56.096170       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:48:56.096201       1 main.go:299] handling current node
	I1008 18:49:06.101737       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:49:06.102044       1 main.go:299] handling current node
	I1008 18:49:16.101858       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:49:16.101890       1 main.go:299] handling current node
	I1008 18:49:26.102358       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:49:26.102392       1 main.go:299] handling current node
	I1008 18:49:36.102522       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:49:36.102560       1 main.go:299] handling current node
	I1008 18:49:46.094271       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1008 18:49:46.094508       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8b125c50b8db07d1a9e5baefe860fc69bcedae71866b8a3970f4a3af946deb47] <==
	I1008 18:52:50.332797       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:52:50.332806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:53:30.657299       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:53:30.657355       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:53:30.657364       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1008 18:53:34.559473       1 handler_proxy.go:102] no RequestInfo found in the context
	E1008 18:53:34.559553       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1008 18:53:34.559568       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 18:54:15.278658       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:54:15.278701       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:54:15.278710       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:54:51.722696       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:54:51.722747       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:54:51.722756       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:55:30.690383       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:55:30.690653       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:55:30.690747       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1008 18:55:31.748177       1 handler_proxy.go:102] no RequestInfo found in the context
	E1008 18:55:31.748258       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1008 18:55:31.748271       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 18:56:04.305341       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:56:04.305384       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:56:04.305393       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [9ec59361f1d6429f1ab5c5dcc837336c0d659a3fa75d0a75ef798eac2eb509b2] <==
	I1008 18:47:25.044456       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1008 18:47:25.530040       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 18:47:25.567738       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1008 18:47:25.659890       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1008 18:47:25.661123       1 controller.go:606] quota admission added evaluator for: endpoints
	I1008 18:47:25.665272       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 18:47:26.635033       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1008 18:47:27.041494       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1008 18:47:27.102517       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1008 18:47:35.500868       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 18:47:42.591506       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1008 18:47:42.832958       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1008 18:47:49.243179       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:47:49.243221       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:47:49.243231       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:48:32.478816       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:48:32.478861       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:48:32.478871       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:49:06.493987       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:49:06.494034       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:49:06.494043       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1008 18:49:44.223114       1 client.go:360] parsed scheme: "passthrough"
	I1008 18:49:44.223168       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1008 18:49:44.223179       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E1008 18:49:51.514384       1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.76.2:8443->192.168.76.1:38986: write: broken pipe
	
	
	==> kube-controller-manager [08def4043006640553776b8b158db002f7324adc36e7d49013e418cb7be87ea9] <==
	I1008 18:47:42.650803       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1008 18:47:42.669597       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1008 18:47:42.670706       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1008 18:47:42.670730       1 shared_informer.go:247] Caches are synced for GC 
	I1008 18:47:42.670767       1 shared_informer.go:247] Caches are synced for endpoint 
	I1008 18:47:42.683975       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I1008 18:47:42.716584       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-rk69g"
	I1008 18:47:42.791004       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-265388" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1008 18:47:42.850835       1 shared_informer.go:247] Caches are synced for resource quota 
	I1008 18:47:42.856246       1 shared_informer.go:247] Caches are synced for job 
	I1008 18:47:42.929802       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-qc6g5"
	I1008 18:47:42.989977       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lmt68"
	I1008 18:47:43.004818       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1008 18:47:43.004883       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jtkrl"
	I1008 18:47:43.152509       1 request.go:655] Throttling request took 1.002693666s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	E1008 18:47:43.177958       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"57fd7ca8-b3be-441d-822f-401c2c14bc1f", ResourceVersion:"418", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864010047, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c47560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c47580)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c475a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c475c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001c475e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400201acc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c47600), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c47620), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c47660)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400203e120), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4002030598), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005e9030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001e34bd8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020305e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I1008 18:47:43.205028       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1008 18:47:43.253850       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1008 18:47:43.253914       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1008 18:47:43.941650       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1008 18:47:43.968054       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-rk69g"
	I1008 18:47:43.998739       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I1008 18:47:43.998847       1 shared_informer.go:247] Caches are synced for resource quota 
	I1008 18:47:47.633746       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1008 18:49:52.858481       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [d454ac385124a434ff99a3a89b796978a0925f0d09f2b44fc3e895b7402a54de] <==
	W1008 18:51:53.055066       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:52:19.096298       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:52:24.705595       1 request.go:655] Throttling request took 1.047672511s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1008 18:52:25.557526       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:52:49.598081       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:52:57.208028       1 request.go:655] Throttling request took 1.048519527s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W1008 18:52:58.059548       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:53:20.099931       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:53:29.709984       1 request.go:655] Throttling request took 1.048408349s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1008 18:53:30.561482       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:53:50.601852       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:54:02.212112       1 request.go:655] Throttling request took 1.048420505s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W1008 18:54:03.063569       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:54:21.103803       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:54:34.714064       1 request.go:655] Throttling request took 1.04806202s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1008 18:54:35.565629       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:54:51.605605       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:55:07.216204       1 request.go:655] Throttling request took 1.0482768s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1008 18:55:08.067725       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:55:22.107739       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:55:39.718171       1 request.go:655] Throttling request took 1.048162623s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1008 18:55:40.569824       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1008 18:55:52.609703       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1008 18:56:12.220240       1 request.go:655] Throttling request took 1.048263273s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1008 18:56:13.071658       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [3c5e8fd7a714c3144d4156aa93a78c131393bbef6d2da9bbdf4a931b6cfaeb52] <==
	I1008 18:47:45.602304       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1008 18:47:45.602396       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1008 18:47:45.651405       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1008 18:47:45.651495       1 server_others.go:185] Using iptables Proxier.
	I1008 18:47:45.651727       1 server.go:650] Version: v1.20.0
	I1008 18:47:45.652371       1 config.go:315] Starting service config controller
	I1008 18:47:45.652388       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1008 18:47:45.657824       1 config.go:224] Starting endpoint slice config controller
	I1008 18:47:45.657845       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1008 18:47:45.752473       1 shared_informer.go:247] Caches are synced for service config 
	I1008 18:47:45.758333       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [5bc796ead04f026f0fab0416d7431839d26474c4f270223932ec7177ff050e29] <==
	I1008 18:50:34.775240       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1008 18:50:34.775408       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1008 18:50:34.799095       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1008 18:50:34.799304       1 server_others.go:185] Using iptables Proxier.
	I1008 18:50:34.799618       1 server.go:650] Version: v1.20.0
	I1008 18:50:34.800450       1 config.go:315] Starting service config controller
	I1008 18:50:34.800520       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1008 18:50:34.800566       1 config.go:224] Starting endpoint slice config controller
	I1008 18:50:34.800653       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1008 18:50:34.901116       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1008 18:50:34.901358       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [22d0db33e2c93328e74900446a3874f986482a9c044ee0cb834ae52376bc848e] <==
	W1008 18:47:24.159693       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 18:47:24.159726       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 18:47:24.159739       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 18:47:24.159744       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 18:47:24.223133       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1008 18:47:24.223312       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:47:24.223436       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:47:24.223536       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1008 18:47:24.237890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 18:47:24.238277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 18:47:24.238497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 18:47:24.238762       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 18:47:24.238953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 18:47:24.239137       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1008 18:47:24.239368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 18:47:24.239606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 18:47:24.239790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 18:47:24.253307       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 18:47:24.256629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1008 18:47:24.256991       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:47:25.219907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1008 18:47:25.247870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:47:25.296113       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1008 18:47:25.302510       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1008 18:47:27.723775       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [3eaac32bdc9ebfcf342b5d3049e7d5c933d8cee0ffc2a3b347619cee2bab6650] <==
	I1008 18:50:26.679074       1 serving.go:331] Generated self-signed cert in-memory
	W1008 18:50:30.713112       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 18:50:30.713136       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 18:50:30.713145       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 18:50:30.713150       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 18:50:30.997127       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1008 18:50:30.997214       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:50:30.997220       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:50:30.997232       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1008 18:50:31.197807       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 08 18:54:53 old-k8s-version-265388 kubelet[661]: E1008 18:54:53.268289     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:03 old-k8s-version-265388 kubelet[661]: E1008 18:55:03.269930     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: I1008 18:55:06.267537     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:55:06 old-k8s-version-265388 kubelet[661]: E1008 18:55:06.268627     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:14 old-k8s-version-265388 kubelet[661]: E1008 18:55:14.268646     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: I1008 18:55:17.268818     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:55:17 old-k8s-version-265388 kubelet[661]: E1008 18:55:17.269172     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: I1008 18:55:29.268602     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.269526     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:29 old-k8s-version-265388 kubelet[661]: E1008 18:55:29.273975     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:55:40 old-k8s-version-265388 kubelet[661]: E1008 18:55:40.268227     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: I1008 18:55:41.267854     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:55:41 old-k8s-version-265388 kubelet[661]: E1008 18:55:41.268178     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: I1008 18:55:53.270619     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:55:53 old-k8s-version-265388 kubelet[661]: E1008 18:55:53.270959     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:55:55 old-k8s-version-265388 kubelet[661]: E1008 18:55:55.268256     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: I1008 18:56:07.274405     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:56:07 old-k8s-version-265388 kubelet[661]: E1008 18:56:07.275689     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	Oct 08 18:56:09 old-k8s-version-265388 kubelet[661]: E1008 18:56:09.276208     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 08 18:56:20 old-k8s-version-265388 kubelet[661]: E1008 18:56:20.306913     661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 08 18:56:20 old-k8s-version-265388 kubelet[661]: E1008 18:56:20.306979     661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 08 18:56:20 old-k8s-version-265388 kubelet[661]: E1008 18:56:20.307143     661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-x2kc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-6czd4_kube-system(3476948
c-48fc-4a89-9eac-8fd486db2af9): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 08 18:56:20 old-k8s-version-265388 kubelet[661]: E1008 18:56:20.307176     661 pod_workers.go:191] Error syncing pod 3476948c-48fc-4a89-9eac-8fd486db2af9 ("metrics-server-9975d5f86-6czd4_kube-system(3476948c-48fc-4a89-9eac-8fd486db2af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 08 18:56:21 old-k8s-version-265388 kubelet[661]: I1008 18:56:21.267982     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ada608f921a267da4579543acde92d73abf8cde7556d8a153587a009fbbd18c
	Oct 08 18:56:21 old-k8s-version-265388 kubelet[661]: E1008 18:56:21.268302     661 pod_workers.go:191] Error syncing pod 90d7e508-258c-489c-a3bd-5b0ad20d450c ("dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7rgqg_kubernetes-dashboard(90d7e508-258c-489c-a3bd-5b0ad20d450c)"
	
	
	==> kubernetes-dashboard [4730beee8eb17ce1d3b22a81dc0769170618efae5265a4a18e70cbbc4b8c4bff] <==
	2024/10/08 18:50:54 Starting overwatch
	2024/10/08 18:50:54 Using namespace: kubernetes-dashboard
	2024/10/08 18:50:54 Using in-cluster config to connect to apiserver
	2024/10/08 18:50:54 Using secret token for csrf signing
	2024/10/08 18:50:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/08 18:50:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/08 18:50:54 Successful initial request to the apiserver, version: v1.20.0
	2024/10/08 18:50:54 Generating JWE encryption key
	2024/10/08 18:50:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/08 18:50:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/08 18:50:55 Initializing JWE encryption key from synchronized object
	2024/10/08 18:50:55 Creating in-cluster Sidecar client
	2024/10/08 18:50:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:50:55 Serving insecurely on HTTP port: 9090
	2024/10/08 18:51:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:51:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:52:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:52:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:53:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:53:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:54:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:54:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:55:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/08 18:55:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [54bd3f1bc6af7c9dea9e44663dba92abed55601cd24da5f4327db49bd763fec7] <==
	I1008 18:51:17.383500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 18:51:17.397481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 18:51:17.397531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 18:51:34.873461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 18:51:34.874283       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"769f223e-c49c-45ab-9cbe-c15c58619e3a", APIVersion:"v1", ResourceVersion:"861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-265388_f1909bf8-d153-45a3-a3e4-dcfe43970ed8 became leader
	I1008 18:51:34.874477       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-265388_f1909bf8-d153-45a3-a3e4-dcfe43970ed8!
	I1008 18:51:34.974946       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-265388_f1909bf8-d153-45a3-a3e4-dcfe43970ed8!
	
	
	==> storage-provisioner [b59eb35fd652d4e3f35621866ce9f731d776c0389f2931af23b8aa380fa850a1] <==
	I1008 18:50:33.717377       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 18:51:03.723996       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-265388 -n old-k8s-version-265388
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-265388 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-6czd4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-265388 describe pod metrics-server-9975d5f86-6czd4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-265388 describe pod metrics-server-9975d5f86-6czd4: exit status 1 (106.597967ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-6czd4" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-265388 describe pod metrics-server-9975d5f86-6czd4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.02s)
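Note: by the time the post-mortem describe ran, the metrics-server pod had already been replaced, hence the NotFound error above. A minimal sketch for reproducing this failure and collecting the same post-mortem data locally follows; the go test invocation and 60m timeout are assumptions (the integration tests also expect the freshly built out/minikube-linux-arm64 binary), while the profile name old-k8s-version-265388 and the kubectl/minikube commands are taken verbatim from the output above. SecondStart only makes sense as part of the full old-k8s-version serial group, so the whole group is re-run:

	# Re-run the whole serial group so SecondStart gets its FirstStart/Stop prerequisites (sketch).
	go test ./test/integration -run 'TestStartStop/group/old-k8s-version' -timeout 60m -v

	# Gather the same post-mortem data the test helpers collect, against an existing profile.
	kubectl --context old-k8s-version-265388 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-265388
	out/minikube-linux-arm64 logs -p old-k8s-version-265388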


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 8.06
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 156.07
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/PullSecret 8.87
34 TestAddons/parallel/Registry 15.78
35 TestAddons/parallel/Ingress 18.87
36 TestAddons/parallel/InspektorGadget 11.77
37 TestAddons/parallel/MetricsServer 6.8
39 TestAddons/parallel/CSI 48.62
40 TestAddons/parallel/Headlamp 18.26
41 TestAddons/parallel/CloudSpanner 5.68
42 TestAddons/parallel/LocalPath 8.89
43 TestAddons/parallel/NvidiaDevicePlugin 5.62
44 TestAddons/parallel/Yakd 11.93
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 31.28
47 TestCertExpiration 232.75
49 TestForceSystemdFlag 33.92
50 TestForceSystemdEnv 46.26
51 TestDockerEnvContainerd 44.81
56 TestErrorSpam/setup 29.55
57 TestErrorSpam/start 0.73
58 TestErrorSpam/status 1.05
59 TestErrorSpam/pause 1.9
60 TestErrorSpam/unpause 1.84
61 TestErrorSpam/stop 1.48
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 87.14
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.79
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.94
73 TestFunctional/serial/CacheCmd/cache/add_local 1.27
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.16
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 43.08
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.72
84 TestFunctional/serial/LogsFileCmd 1.7
85 TestFunctional/serial/InvalidService 4.41
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 12.14
89 TestFunctional/parallel/DryRun 0.38
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.11
95 TestFunctional/parallel/ServiceCmdConnect 9.63
96 TestFunctional/parallel/AddonsCmd 0.2
97 TestFunctional/parallel/PersistentVolumeClaim 23.82
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.25
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.14
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
111 TestFunctional/parallel/License 0.27
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
127 TestFunctional/parallel/MountCmd/any-port 7.18
128 TestFunctional/parallel/ServiceCmd/List 0.57
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
131 TestFunctional/parallel/ServiceCmd/Format 0.37
132 TestFunctional/parallel/ServiceCmd/URL 0.36
133 TestFunctional/parallel/MountCmd/specific-port 1.96
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.89
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.2
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
142 TestFunctional/parallel/ImageCommands/Setup 0.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 117.25
160 TestMultiControlPlane/serial/DeployApp 30.69
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 24.35
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
165 TestMultiControlPlane/serial/CopyFile 18.64
166 TestMultiControlPlane/serial/StopSecondaryNode 12.88
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
168 TestMultiControlPlane/serial/RestartSecondaryNode 31.16
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.69
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.65
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
173 TestMultiControlPlane/serial/StopCluster 36.08
174 TestMultiControlPlane/serial/RestartCluster 52.98
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
176 TestMultiControlPlane/serial/AddSecondaryNode 44.43
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
181 TestJSONOutput/start/Command 47.42
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.8
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 38.15
207 TestKicCustomNetwork/use_default_bridge_network 33.81
208 TestKicExistingNetwork 30.09
209 TestKicCustomSubnet 32.78
210 TestKicStaticIP 32.14
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 66.47
215 TestMountStart/serial/StartWithMountFirst 5.9
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.26
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.22
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 69.38
227 TestMultiNode/serial/DeployApp2Nodes 16.04
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 15.85
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.67
232 TestMultiNode/serial/CopyFile 9.87
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 9.48
235 TestMultiNode/serial/RestartKeepsNodes 80.08
236 TestMultiNode/serial/DeleteNode 5.22
237 TestMultiNode/serial/StopMultiNode 24.04
238 TestMultiNode/serial/RestartMultiNode 50.36
239 TestMultiNode/serial/ValidateNameConflict 31.86
244 TestPreload 112.12
246 TestScheduledStopUnix 107.71
249 TestInsufficientStorage 12.46
250 TestRunningBinaryUpgrade 85.39
252 TestKubernetesUpgrade 343.64
253 TestMissingContainerUpgrade 167.43
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.94
257 TestNoKubernetes/serial/StartWithStopK8s 17
258 TestNoKubernetes/serial/Start 5.35
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
260 TestNoKubernetes/serial/ProfileList 0.97
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 6.5
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
264 TestStoppedBinaryUpgrade/Setup 0.89
265 TestStoppedBinaryUpgrade/Upgrade 104.28
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
275 TestPause/serial/Start 90.15
279 TestPause/serial/SecondStartNoReconfiguration 6.86
284 TestNetworkPlugins/group/false 4.69
285 TestPause/serial/Pause 0.9
289 TestPause/serial/VerifyStatus 0.38
290 TestPause/serial/Unpause 0.82
291 TestPause/serial/PauseAgain 1.2
292 TestPause/serial/DeletePaused 2.95
293 TestPause/serial/VerifyDeletedResources 0.16
295 TestStartStop/group/old-k8s-version/serial/FirstStart 176.97
297 TestStartStop/group/no-preload/serial/FirstStart 60.34
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.8
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.91
300 TestStartStop/group/old-k8s-version/serial/Stop 12.54
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
303 TestStartStop/group/no-preload/serial/DeployApp 10.45
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.94
305 TestStartStop/group/no-preload/serial/Stop 12.38
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 269.56
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
311 TestStartStop/group/no-preload/serial/Pause 3.06
313 TestStartStop/group/embed-certs/serial/FirstStart 79.1
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/old-k8s-version/serial/Pause 2.94
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.94
320 TestStartStop/group/embed-certs/serial/DeployApp 8.48
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
322 TestStartStop/group/embed-certs/serial/Stop 12.56
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
324 TestStartStop/group/embed-certs/serial/SecondStart 266.54
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.85
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.04
335 TestStartStop/group/newest-cni/serial/FirstStart 36.49
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
338 TestStartStop/group/newest-cni/serial/Stop 1.25
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/newest-cni/serial/SecondStart 15.61
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
344 TestStartStop/group/newest-cni/serial/Pause 3
345 TestNetworkPlugins/group/auto/Start 54.51
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.24
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 9.36
352 TestNetworkPlugins/group/kindnet/Start 91.38
353 TestNetworkPlugins/group/auto/DNS 0.23
354 TestNetworkPlugins/group/auto/Localhost 0.18
355 TestNetworkPlugins/group/auto/HairPin 0.17
356 TestNetworkPlugins/group/calico/Start 57.82
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.29
360 TestNetworkPlugins/group/calico/NetCatPod 11.28
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
362 TestNetworkPlugins/group/kindnet/NetCatPod 9.36
363 TestNetworkPlugins/group/kindnet/DNS 0.2
364 TestNetworkPlugins/group/kindnet/Localhost 0.16
365 TestNetworkPlugins/group/kindnet/HairPin 0.16
366 TestNetworkPlugins/group/calico/DNS 0.17
367 TestNetworkPlugins/group/calico/Localhost 0.17
368 TestNetworkPlugins/group/calico/HairPin 0.16
369 TestNetworkPlugins/group/custom-flannel/Start 62.36
370 TestNetworkPlugins/group/enable-default-cni/Start 80.36
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
373 TestNetworkPlugins/group/custom-flannel/DNS 0.21
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
381 TestNetworkPlugins/group/flannel/Start 56.35
382 TestNetworkPlugins/group/bridge/Start 73.39
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
385 TestNetworkPlugins/group/flannel/NetCatPod 12.27
386 TestNetworkPlugins/group/flannel/DNS 0.17
387 TestNetworkPlugins/group/flannel/Localhost 0.15
388 TestNetworkPlugins/group/flannel/HairPin 0.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 9.27
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.16

TestDownloadOnly/v1.20.0/json-events (14.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-945652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-945652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.2706919s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.27s)
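For reference, the download-only run above can be replayed outside the test harness. This is only a sketch (it assumes a local out/minikube-linux-arm64 binary and jq on PATH) that pretty-prints each JSON event emitted by -o=json:

    # sketch: replay the download-only start and stream its JSON events (jq assumed installed)
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-945652 \
      --force --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker | jq -c .
    # remove the throwaway profile afterwards
    out/minikube-linux-arm64 delete -p download-only-945652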

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1008 18:01:18.876105  288541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1008 18:01:18.876189  288541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-945652
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-945652: exit status 85 (67.35511ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |          |
	|         | -p download-only-945652        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:01:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:01:04.655195  288546 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:01:04.655625  288546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:04.655654  288546 out.go:358] Setting ErrFile to fd 2...
	I1008 18:01:04.655673  288546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:04.655993  288546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	W1008 18:01:04.656182  288546 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19774-283126/.minikube/config/config.json: open /home/jenkins/minikube-integration/19774-283126/.minikube/config/config.json: no such file or directory
	I1008 18:01:04.656681  288546 out.go:352] Setting JSON to true
	I1008 18:01:04.657648  288546 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6213,"bootTime":1728404252,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:01:04.657780  288546 start.go:139] virtualization:  
	I1008 18:01:04.660786  288546 out.go:97] [download-only-945652] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1008 18:01:04.660971  288546 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 18:01:04.661070  288546 notify.go:220] Checking for updates...
	I1008 18:01:04.663695  288546 out.go:169] MINIKUBE_LOCATION=19774
	I1008 18:01:04.665396  288546 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:01:04.667201  288546 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:01:04.668792  288546 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:01:04.670383  288546 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1008 18:01:04.673266  288546 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 18:01:04.673583  288546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:01:04.694218  288546 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:01:04.694337  288546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:04.759506  288546 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:01:04.749979217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:04.759621  288546 docker.go:318] overlay module found
	I1008 18:01:04.761248  288546 out.go:97] Using the docker driver based on user configuration
	I1008 18:01:04.761274  288546 start.go:297] selected driver: docker
	I1008 18:01:04.761288  288546 start.go:901] validating driver "docker" against <nil>
	I1008 18:01:04.761394  288546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:04.805881  288546 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:01:04.796418618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:04.806093  288546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:01:04.806379  288546 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1008 18:01:04.806541  288546 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 18:01:04.808229  288546 out.go:169] Using Docker driver with root privileges
	I1008 18:01:04.809503  288546 cni.go:84] Creating CNI manager for ""
	I1008 18:01:04.809574  288546 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:01:04.809588  288546 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 18:01:04.809709  288546 start.go:340] cluster config:
	{Name:download-only-945652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-945652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:01:04.811023  288546 out.go:97] Starting "download-only-945652" primary control-plane node in "download-only-945652" cluster
	I1008 18:01:04.811042  288546 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1008 18:01:04.812132  288546 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1008 18:01:04.812154  288546 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1008 18:01:04.812302  288546 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1008 18:01:04.827721  288546 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1008 18:01:04.828270  288546 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1008 18:01:04.828374  288546 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1008 18:01:04.875286  288546 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1008 18:01:04.875313  288546 cache.go:56] Caching tarball of preloaded images
	I1008 18:01:04.875467  288546 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1008 18:01:04.876875  288546 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1008 18:01:04.876903  288546 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1008 18:01:04.967001  288546 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1008 18:01:10.602321  288546 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1008 18:01:10.602425  288546 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1008 18:01:11.731158  288546 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1008 18:01:11.731592  288546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/download-only-945652/config.json ...
	I1008 18:01:11.731629  288546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/download-only-945652/config.json: {Name:mk1014d40840e58f3a29cb8cdf9055dadd54a645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:01:11.731822  288546 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1008 18:01:11.732008  288546 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19774-283126/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-945652 host does not exist
	  To start a cluster, run: "minikube start -p download-only-945652"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
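The Last Start log above shows the preload tarball being fetched with an md5 hint (checksum=md5:7e3d48ccb9f143791669d02e14ce1643) and verified before use. As a rough manual cross-check, assuming the tarball is still at the cache path printed in the log, the digest can be recomputed:

    # sketch: recompute the md5 the preloader verified; expected 7e3d48ccb9f143791669d02e14ce1643
    md5sum /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4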

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-945652
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (8.06s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-063477 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-063477 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.060713874s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1008 18:01:27.347095  288541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1008 18:01:27.347135  288541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-063477
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-063477: exit status 85 (71.484689ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | -p download-only-945652        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| delete  | -p download-only-945652        | download-only-945652 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	| start   | -o=json --download-only        | download-only-063477 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | -p download-only-063477        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:01:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:01:19.335032  288749 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:01:19.335161  288749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:19.335172  288749 out.go:358] Setting ErrFile to fd 2...
	I1008 18:01:19.335178  288749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:19.335446  288749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:01:19.335836  288749 out.go:352] Setting JSON to true
	I1008 18:01:19.336740  288749 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6228,"bootTime":1728404252,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:01:19.336813  288749 start.go:139] virtualization:  
	I1008 18:01:19.338390  288749 out.go:97] [download-only-063477] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:01:19.338635  288749 notify.go:220] Checking for updates...
	I1008 18:01:19.339702  288749 out.go:169] MINIKUBE_LOCATION=19774
	I1008 18:01:19.341048  288749 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:01:19.342281  288749 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:01:19.343247  288749 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:01:19.344170  288749 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1008 18:01:19.346230  288749 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 18:01:19.346497  288749 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:01:19.366898  288749 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:01:19.367027  288749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:19.424271  288749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:19.413627696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:19.424388  288749 docker.go:318] overlay module found
	I1008 18:01:19.425590  288749 out.go:97] Using the docker driver based on user configuration
	I1008 18:01:19.425613  288749 start.go:297] selected driver: docker
	I1008 18:01:19.425620  288749 start.go:901] validating driver "docker" against <nil>
	I1008 18:01:19.425806  288749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:01:19.474571  288749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-08 18:01:19.465512984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:01:19.474776  288749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:01:19.475068  288749 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1008 18:01:19.475222  288749 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 18:01:19.476544  288749 out.go:169] Using Docker driver with root privileges
	I1008 18:01:19.477693  288749 cni.go:84] Creating CNI manager for ""
	I1008 18:01:19.477762  288749 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1008 18:01:19.477776  288749 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 18:01:19.477872  288749 start.go:340] cluster config:
	{Name:download-only-063477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-063477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:01:19.479043  288749 out.go:97] Starting "download-only-063477" primary control-plane node in "download-only-063477" cluster
	I1008 18:01:19.479060  288749 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1008 18:01:19.480027  288749 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1008 18:01:19.480058  288749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:01:19.480164  288749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1008 18:01:19.495249  288749 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1008 18:01:19.495378  288749 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1008 18:01:19.495401  288749 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1008 18:01:19.495406  288749 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1008 18:01:19.495414  288749 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1008 18:01:19.544911  288749 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1008 18:01:19.544937  288749 cache.go:56] Caching tarball of preloaded images
	I1008 18:01:19.546174  288749 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1008 18:01:19.547385  288749 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1008 18:01:19.547404  288749 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1008 18:01:19.644732  288749 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19774-283126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-063477 host does not exist
	  To start a cluster, run: "minikube start -p download-only-063477"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-063477
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I1008 18:01:28.588436  288541 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-075119 --alsologtostderr --binary-mirror http://127.0.0.1:34241 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-075119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-075119
--- PASS: TestBinaryMirror (0.56s)
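The binary.go line above shows the kubectl binary being taken straight from dl.k8s.io together with a published sha256. A manual spot-check of the same artifact (assuming curl and sha256sum are available on the host) looks like:

    # sketch: fetch the same kubectl build and compare against the published digest
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl
    curl -L https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
    sha256sum kubectl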

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-246349
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-246349: exit status 85 (76.859522ms)

                                                
                                                
-- stdout --
	* Profile "addons-246349" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246349"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
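The pass criterion here is the specific exit status, not just the error text. A quick manual reproduction (assuming no addons-246349 profile exists yet) is:

    # sketch: enabling an addon for a missing profile should exit with status 85
    out/minikube-linux-arm64 addons enable dashboard -p addons-246349
    echo "exit status: $?"   # expected: 85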

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-246349
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-246349: exit status 85 (75.757129ms)

                                                
                                                
-- stdout --
	* Profile "addons-246349" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246349"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (156.07s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-246349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-246349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.070800074s)
--- PASS: TestAddons/Setup (156.07s)
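After a start like the one above, the enabled addon set and its pods can be inspected directly; a minimal follow-up check (assuming the addons-246349 profile is still up) is:

    # sketch: confirm which addons the profile reports as enabled and that their pods came up
    out/minikube-linux-arm64 -p addons-246349 addons list
    kubectl --context addons-246349 get pods --all-namespaces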

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-246349 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-246349 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/PullSecret (8.87s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-246349 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-246349 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30a8fc0c-d31d-4ba1-aa42-26365c510bd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30a8fc0c-d31d-4ba1-aa42-26365c510bd1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 8.003509429s
addons_test.go:633: (dbg) Run:  kubectl --context addons-246349 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-246349 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-246349 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-246349 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (8.87s)

                                                
                                    
TestAddons/parallel/Registry (15.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.194779ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8tr5n" [0ecafdb8-54b7-4fd2-a93c-946dbacc3308] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003616759s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-827n9" [5050ac4c-9bae-47a6-9b15-3fd5cae17f26] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003249156s
addons_test.go:331: (dbg) Run:  kubectl --context addons-246349 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-246349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-246349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.808687716s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 ip
2024/10/08 18:08:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)
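The DEBUG GET above shows the registry proxy answering on the node IP at port 5000. Assuming the addon exposes the standard Docker Registry v2 API there, a quick check from the host is:

    # sketch: list repositories via the registry proxy on the node IP (v2 API assumed)
    curl -s "http://$(out/minikube-linux-arm64 -p addons-246349 ip):5000/v2/_catalog"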

                                                
                                    
TestAddons/parallel/Ingress (18.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-246349 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-246349 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-246349 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [95102c53-a61c-4dbe-bcba-d08d2932a5ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [95102c53-a61c-4dbe-bcba-d08d2932a5ec] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00486565s
I1008 18:09:03.347191  288541 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-246349 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable ingress-dns --alsologtostderr -v=1: (1.474391236s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable ingress --alsologtostderr -v=1: (7.784732907s)
--- PASS: TestAddons/parallel/Ingress (18.87s)
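The same two checks can also be run from the host against the node IP (reachable here because the docker driver runs on a Linux host). A sketch reusing the Host header and test hostname from the log above:

    # sketch: exercise the ingress and ingress-dns endpoints from the host
    MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-246349 ip)
    curl -s -H 'Host: nginx.example.com' "http://$MINIKUBE_IP/"
    nslookup hello-john.test "$MINIKUBE_IP"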

                                                
                                    
TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nff5l" [3125bc06-e78a-4a71-91d5-947595eb110a] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003337561s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable inspektor-gadget --alsologtostderr -v=1: (5.769823702s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.616511ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4g8nz" [b47dc422-e583-458b-a57a-f97fb1c1ea0c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006537549s
addons_test.go:402: (dbg) Run:  kubectl --context addons-246349 top pods -n kube-system
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)
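With metrics-server healthy, the metrics API can be queried for nodes as well as pods; the equivalent manual commands against the same context are:

    # sketch: pod and node resource usage served by the metrics-server addon
    kubectl --context addons-246349 top pods -n kube-system
    kubectl --context addons-246349 top nodes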

                                                
                                    
TestAddons/parallel/CSI (48.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1008 18:08:19.321812  288541 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 18:08:19.371942  288541 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1008 18:08:19.372282  288541 kapi.go:107] duration metric: took 52.223433ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 52.442008ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f4593f72-8631-4cbc-9541-dcb8bc0806d9] Pending
helpers_test.go:344: "task-pv-pod" [f4593f72-8631-4cbc-9541-dcb8bc0806d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f4593f72-8631-4cbc-9541-dcb8bc0806d9] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004449087s
addons_test.go:511: (dbg) Run:  kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-246349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-246349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-246349 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-246349 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b4051c16-9e2d-4eea-a26d-385cc89ac61b] Pending
helpers_test.go:344: "task-pv-pod-restore" [b4051c16-9e2d-4eea-a26d-385cc89ac61b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b4051c16-9e2d-4eea-a26d-385cc89ac61b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0041804s
addons_test.go:553: (dbg) Run:  kubectl --context addons-246349 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-246349 delete pod task-pv-pod-restore: (1.404230766s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-246349 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-246349 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.874139275s)
--- PASS: TestAddons/parallel/CSI (48.62s)
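For reference, the snapshot/restore cycle exercised above can be reproduced by hand with the same manifests; a sketch built from the commands in the log (`minikube` stands for the binary under test, and the addons were already enabled in this run):
    # addon prerequisites
    minikube -p addons-246349 addons enable volumesnapshots
    minikube -p addons-246349 addons enable csi-hostpath-driver
    # provision a PVC, mount it in a pod, then snapshot it
    kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-246349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    # restore the snapshot into a new PVC and pod
    kubectl --context addons-246349 delete pod task-pv-pod
    kubectl --context addons-246349 delete pvc hpvc
    kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-246349 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml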

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-246349 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-246349 --alsologtostderr -v=1: (1.382550658s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-485nc" [f570ea55-e52b-4df3-831b-5b3829842798] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-485nc" [f570ea55-e52b-4df3-831b-5b3829842798] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004400927s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable headlamp --alsologtostderr -v=1: (5.876305723s)
--- PASS: TestAddons/parallel/Headlamp (18.26s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-b4d47" [9eef6ca8-74e8-4eda-9ccb-43f413d0b3dd] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004262343s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.89s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-246349 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-246349 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd5a9f82-fd4e-40ac-b746-5bdffc10384c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd5a9f82-fd4e-40ac-b746-5bdffc10384c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd5a9f82-fd4e-40ac-b746-5bdffc10384c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005963485s
addons_test.go:901: (dbg) Run:  kubectl --context addons-246349 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 ssh "cat /opt/local-path-provisioner/pvc-46f371f9-dfb1-4188-88a9-68245fb1a105_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-246349 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-246349 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.89s)
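The local-path check above is essentially: create a PVC backed by the storage-provisioner-rancher addon, write to it from a pod, and read the data back from the node's /opt/local-path-provisioner directory. A sketch from the logged commands (the exact pvc-<uuid> directory name is generated per run, so it is left generic here):
    kubectl --context addons-246349 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-246349 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-246349 get pvc test-pvc -o jsonpath={.status.phase}
    # the provisioned data lives under /opt/local-path-provisioner/pvc-<uuid>_default_test-pvc on the node
    minikube -p addons-246349 ssh "ls /opt/local-path-provisioner"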

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5d4vx" [dafed154-2336-4889-8370-c2b31d4fc071] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004070726s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-246349
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8ztv6" [e715eff6-e646-4d4f-881e-8572e2b45a2c] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003766188s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-246349 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-246349 addons disable yakd --alsologtostderr -v=1: (5.926881667s)
--- PASS: TestAddons/parallel/Yakd (11.93s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-246349
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-246349: (12.028463133s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-246349
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-246349
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-246349
--- PASS: TestAddons/StoppedEnableDisable (12.31s)
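What this verifies is that addon enable/disable still works against a stopped cluster; the equivalent shell sequence, taken directly from the log:
    minikube stop -p addons-246349                        # ~12s with the containerd runtime
    minikube addons enable dashboard -p addons-246349     # succeeds even though the cluster is stopped
    minikube addons disable dashboard -p addons-246349
    minikube addons disable gvisor -p addons-246349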

                                                
                                    
x
+
TestCertOptions (31.28s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-178809 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-178809 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (28.618548491s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-178809 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-178809 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-178809 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-178809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-178809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-178809: (1.976198912s)
--- PASS: TestCertOptions (31.28s)
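The test passes custom SANs and a non-default API server port at start time and then inspects the generated certificate; a sketch of the same check (the start flags and the openssl invocation are from the log, the trailing grep is an added illustration):
    minikube start -p cert-options-178809 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # the extra IPs and names should appear as Subject Alternative Names
    minikube -p cert-options-178809 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    minikube ssh -p cert-options-178809 -- "sudo cat /etc/kubernetes/admin.conf"   # should reference port 8555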

                                                
                                    
x
+
TestCertExpiration (232.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-974463 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-974463 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.820498213s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-974463 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-974463 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.54173473s)
helpers_test.go:175: Cleaning up "cert-expiration-974463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-974463
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-974463: (2.390580962s)
--- PASS: TestCertExpiration (232.75s)
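Most of the 232s here is deliberate waiting: the cluster is first started with certificates that expire after 3 minutes, the test lets them lapse, and a second start with a long expiry must succeed by regenerating them. A sketch of the two starts (flags from the log; the sleep stands in for the wait the harness performs):
    minikube start -p cert-expiration-974463 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180    # let the 3m certificates expire
    minikube start -p cert-expiration-974463 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd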

                                                
                                    
x
+
TestForceSystemdFlag (33.92s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-730485 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-730485 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.63418456s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-730485 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-730485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-730485
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-730485: (1.982076112s)
--- PASS: TestForceSystemdFlag (33.92s)
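The point of the `cat /etc/containerd/config.toml` step is that --force-systemd switches containerd to the systemd cgroup driver; a sketch of the same check (the grep is an added illustration of what to look for, not the test's literal assertion):
    minikube start -p force-systemd-flag-730485 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    # with --force-systemd the runc options are expected to carry SystemdCgroup = true
    minikube -p force-systemd-flag-730485 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup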

                                                
                                    
x
+
TestForceSystemdEnv (46.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-275989 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-275989 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.619759485s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-275989 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-275989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-275989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-275989: (2.264351979s)
--- PASS: TestForceSystemdEnv (46.26s)

                                                
                                    
x
+
TestDockerEnvContainerd (44.81s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-967316 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-967316 --driver=docker  --container-runtime=containerd: (29.248038661s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-967316"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-967316": (1.000749409s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4zd9S7aTRsOt/agent.309882" SSH_AGENT_PID="309883" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4zd9S7aTRsOt/agent.309882" SSH_AGENT_PID="309883" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4zd9S7aTRsOt/agent.309882" SSH_AGENT_PID="309883" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.236764097s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4zd9S7aTRsOt/agent.309882" SSH_AGENT_PID="309883" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-967316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-967316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-967316: (1.961601712s)
--- PASS: TestDockerEnvContainerd (44.81s)
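The docker-env flow above lets a local docker CLI talk to the engine inside the minikube node over SSH; a condensed sketch of the same sequence (the eval wrapper is the usual way to apply docker-env output and is an assumption here, the rest mirrors the log):
    eval "$(minikube -p dockerenv-967316 docker-env --ssh-host --ssh-add)"
    docker version                        # now served by the engine inside the node, via ssh://
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls                       # the freshly built image should be listed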

                                                
                                    
x
+
TestErrorSpam/setup (29.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-436362 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-436362 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-436362 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-436362 --driver=docker  --container-runtime=containerd: (29.54774011s)
--- PASS: TestErrorSpam/setup (29.55s)

                                                
                                    
x
+
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
x
+
TestErrorSpam/status (1.05s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
x
+
TestErrorSpam/pause (1.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 pause
--- PASS: TestErrorSpam/pause (1.90s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 stop: (1.272392274s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-436362 --log_dir /tmp/nospam-436362 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19774-283126/.minikube/files/etc/test/nested/copy/288541/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (87.14s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-138958 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m27.134890988s)
--- PASS: TestFunctional/serial/StartWithProxy (87.14s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.79s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1008 18:12:25.372056  288541 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-138958 --alsologtostderr -v=8: (5.786073716s)
functional_test.go:663: soft start took 5.786606627s for "functional-138958" cluster.
I1008 18:12:31.158420  288541 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.79s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-138958 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:3.1: (1.470427373s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:3.3: (1.332198652s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 cache add registry.k8s.io/pause:latest: (1.141318522s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-138958 /tmp/TestFunctionalserialCacheCmdcacheadd_local4004734867/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache add minikube-local-cache-test:functional-138958
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache delete minikube-local-cache-test:functional-138958
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-138958
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.813973ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 cache reload: (1.109716214s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
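The reload check is: remove a cached image from the node, confirm it is gone, then let `cache reload` push it back from minikube's local cache. A sketch, straight from the logged commands:
    minikube -p functional-138958 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-138958 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p functional-138958 cache reload
    minikube -p functional-138958 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again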

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 kubectl -- --context functional-138958 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-138958 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (43.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-138958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.078715292s)
functional_test.go:761: restart took 43.078847529s for "functional-138958" cluster.
I1008 18:13:22.459705  288541 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (43.08s)
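--extra-config lets a restart inject component flags into an existing cluster; the restart above, plus the control-plane inspection the ComponentHealth test below performs, reduce to:
    minikube start -p functional-138958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-138958 get po -l tier=control-plane -n kube-system -o=json   # etcd, apiserver, controller-manager, scheduler all Ready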

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-138958 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 logs: (1.722623005s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 logs --file /tmp/TestFunctionalserialLogsFileCmd4268226815/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 logs --file /tmp/TestFunctionalserialLogsFileCmd4268226815/001/logs.txt: (1.696395165s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-138958 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-138958
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-138958: exit status 115 (574.443037ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30681 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-138958 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
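This negative test applies a Service whose selector matches no running pod and expects `minikube service` to fail with SVC_UNREACHABLE rather than hand back a usable URL; the sequence from the log:
    kubectl --context functional-138958 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-138958      # exit 115: no running pod for service invalid-svc
    kubectl --context functional-138958 delete -f testdata/invalidsvc.yaml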

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 config get cpus: exit status 14 (92.812512ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 config get cpus: exit status 14 (80.874686ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-138958 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-138958 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 324852: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.14s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-138958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (168.370391ms)

                                                
                                                
-- stdout --
	* [functional-138958] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:14:02.561787  324385 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:14:02.561906  324385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:14:02.561917  324385 out.go:358] Setting ErrFile to fd 2...
	I1008 18:14:02.561924  324385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:14:02.562324  324385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:14:02.562787  324385 out.go:352] Setting JSON to false
	I1008 18:14:02.563869  324385 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6991,"bootTime":1728404252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:14:02.563978  324385 start.go:139] virtualization:  
	I1008 18:14:02.565880  324385 out.go:177] * [functional-138958] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:14:02.567148  324385 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:14:02.567286  324385 notify.go:220] Checking for updates...
	I1008 18:14:02.569750  324385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:14:02.571096  324385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:14:02.572573  324385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:14:02.573728  324385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:14:02.574809  324385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:14:02.576420  324385 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:14:02.577021  324385 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:14:02.602031  324385 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:14:02.602164  324385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:14:02.664590  324385 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:14:02.653504833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:14:02.664719  324385 docker.go:318] overlay module found
	I1008 18:14:02.666206  324385 out.go:177] * Using the docker driver based on existing profile
	I1008 18:14:02.667360  324385 start.go:297] selected driver: docker
	I1008 18:14:02.667374  324385 start.go:901] validating driver "docker" against &{Name:functional-138958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-138958 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:14:02.667479  324385 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:14:02.669243  324385 out.go:201] 
	W1008 18:14:02.670760  324385 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 18:14:02.671889  324385 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.38s)
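--dry-run validates the requested configuration without touching the existing profile, which is why the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY while the second invocation passes; from the log:
    minikube start -p functional-138958 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd   # exit 23: below the 1800MB minimum
    minikube start -p functional-138958 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd            # passes, cluster untouched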

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-138958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-138958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (209.214523ms)

                                                
                                                
-- stdout --
	* [functional-138958] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:14:02.379363  324342 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:14:02.379890  324342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:14:02.379934  324342 out.go:358] Setting ErrFile to fd 2...
	I1008 18:14:02.379955  324342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:14:02.382073  324342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:14:02.382670  324342 out.go:352] Setting JSON to false
	I1008 18:14:02.384730  324342 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6991,"bootTime":1728404252,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:14:02.384811  324342 start.go:139] virtualization:  
	I1008 18:14:02.388990  324342 out.go:177] * [functional-138958] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1008 18:14:02.394631  324342 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:14:02.394790  324342 notify.go:220] Checking for updates...
	I1008 18:14:02.399235  324342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:14:02.400816  324342 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:14:02.402181  324342 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:14:02.403329  324342 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:14:02.404519  324342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:14:02.406372  324342 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:14:02.406971  324342 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:14:02.428223  324342 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:14:02.428373  324342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:14:02.496283  324342 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:14:02.485986843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:14:02.496398  324342 docker.go:318] overlay module found
	I1008 18:14:02.498025  324342 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1008 18:14:02.499610  324342 start.go:297] selected driver: docker
	I1008 18:14:02.499627  324342 start.go:901] validating driver "docker" against &{Name:functional-138958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-138958 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:14:02.499743  324342 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:14:02.501425  324342 out.go:201] 
	W1008 18:14:02.502597  324342 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 18:14:02.503787  324342 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
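This test repeats the failing 250MB dry run, but the harness runs the binary under a French locale, so the same RSRC_INSUFFICIENT_REQ_MEMORY exit is reported as "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", the French rendering of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB" seen in the DryRun test above. A sketch of a manual check, assuming the locale is selected through an environment variable such as LC_ALL (the exact mechanism the harness uses is not visible in this log):

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-138958 --dry-run --memory 250MB --driver=docker --container-runtime=containerd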

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-138958 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-138958 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-rzgfg" [4384903f-87fd-4c16-b00a-81224dcfc34a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-rzgfg" [4384903f-87fd-4c16-b00a-81224dcfc34a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003704542s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31508
functional_test.go:1675: http://192.168.49.2:31508: success! body:

Hostname: hello-node-connect-65d86f57f4-rzgfg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31508
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.63s)
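The block above is a complete NodePort round trip: create the echoserver deployment, expose it on port 8080, ask minikube for the node URL, and fetch it; the echoed body confirms the request reached pod hello-node-connect-65d86f57f4-rzgfg. Condensed, with curl standing in for the test's own HTTP GET:

	kubectl --context functional-138958 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-138958 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-138958 service hello-node-connect --url    # printed http://192.168.49.2:31508 in this run
	curl http://192.168.49.2:31508/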

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a571074a-d5fe-4a1a-9a2d-591d6df9ba84] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003859998s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-138958 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-138958 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-138958 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138958 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e8c02db8-df9c-4a90-8ea8-8c231b138997] Pending
helpers_test.go:344: "sp-pod" [e8c02db8-df9c-4a90-8ea8-8c231b138997] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e8c02db8-df9c-4a90-8ea8-8c231b138997] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004193956s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-138958 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-138958 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138958 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f2c88832-9a89-4114-8e56-08f7f546951e] Pending
helpers_test.go:344: "sp-pod" [f2c88832-9a89-4114-8e56-08f7f546951e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004260505s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-138958 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.82s)
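The claim and pod manifests come from the test's testdata directory and are not reproduced in this log; the sequence is: apply a PVC named myclaim, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod, and confirm the file survived, which exercises the storage-provisioner backed default StorageClass. A minimal claim of the kind being applied might look like the following (the 500Mi size and access mode are illustrative guesses, not the literal testdata contents):

	kubectl --context functional-138958 apply -f - <<-EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF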

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh -n functional-138958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cp functional-138958:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3081169029/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh -n functional-138958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh -n functional-138958 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/288541/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /etc/test/nested/copy/288541/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/288541.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /etc/ssl/certs/288541.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/288541.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /usr/share/ca-certificates/288541.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2885412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /etc/ssl/certs/2885412.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2885412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /usr/share/ca-certificates/2885412.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
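The six paths above are the two forms in which minikube syncs the test certificates into the node: under their original names (288541.pem and 2885412.pem, in both /etc/ssl/certs and /usr/share/ca-certificates) and under their OpenSSL subject-hash names (51391683.0 and 3ec20f2e.0), the form OpenSSL-based clients use for lookups. Assuming each hash-named file corresponds to the .pem checked just before it, the pairing can be verified on the node with:

	out/minikube-linux-arm64 -p functional-138958 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/288541.pem"    # expected to print 51391683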

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-138958 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "sudo systemctl is-active docker": exit status 1 (353.549748ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "sudo systemctl is-active crio": exit status 1 (364.534781ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
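Exit status 3 in the stderr blocks is the expected outcome here, not a failure: this profile runs containerd, so "systemctl is-active docker" and "systemctl is-active crio" print "inactive" and return a non-zero status (3 for an inactive unit), which the minikube ssh wrapper surfaces as exit status 1. A quick manual check:

	out/minikube-linux-arm64 -p functional-138958 ssh "sudo systemctl is-active docker"; echo $?
	# prints "inactive" followed by a non-zero exit status on a containerd-only node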

                                                
                                    
x
+
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 321710: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-138958 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a7ec35fd-4f86-49b1-aaf5-a8d75ca887f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a7ec35fd-4f86-49b1-aaf5-a8d75ca887f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003772148s
I1008 18:13:41.626473  288541 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-138958 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.194.132 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
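The tunnel subtests above run out/minikube-linux-arm64 tunnel as a background daemon, deploy the nginx-svc LoadBalancer service from testdata/testsvc.yaml, read the ingress IP the tunnel assigns (10.109.194.132 in this run), and fetch it directly from the host. The manual equivalent, with the tunnel left running in another terminal and curl standing in for the test's HTTP check:

	out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr &
	kubectl --context functional-138958 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'    # 10.109.194.132 in this run
	curl http://10.109.194.132/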

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-138958 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-138958 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-138958 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-kjr9q" [69cd6c5d-a72a-425b-99d9-e76482707785] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-kjr9q" [69cd6c5d-a72a-425b-99d9-e76482707785] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004499773s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "359.717494ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "64.432929ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "351.539614ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "57.781172ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdany-port1680459575/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728411237718241524" to /tmp/TestFunctionalparallelMountCmdany-port1680459575/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728411237718241524" to /tmp/TestFunctionalparallelMountCmdany-port1680459575/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728411237718241524" to /tmp/TestFunctionalparallelMountCmdany-port1680459575/001/test-1728411237718241524
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.717308ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 18:13:58.059275  288541 retry.go:31] will retry after 418.991957ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 18:13 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 18:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 18:13 test-1728411237718241524
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh cat /mount-9p/test-1728411237718241524
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-138958 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [490c52f2-f195-4faa-ae31-d1d00d8a4ede] Pending
helpers_test.go:344: "busybox-mount" [490c52f2-f195-4faa-ae31-d1d00d8a4ede] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [490c52f2-f195-4faa-ae31-d1d00d8a4ede] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [490c52f2-f195-4faa-ae31-d1d00d8a4ede] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004078447s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-138958 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdany-port1680459575/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.18s)
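The any-port mount test starts out/minikube-linux-arm64 mount as a background daemon, seeds the host directory with three marker files, waits for the 9p filesystem to appear inside the node (the first findmnt attempt fails and is retried while the mount comes up), and then has the busybox-mount pod read and write through it before the mount is unmounted with umount -f. By hand, with an illustrative host directory in place of the test's temp dir:

	out/minikube-linux-arm64 mount -p functional-138958 /tmp/mount-demo:/mount-9p &
	out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-138958 ssh "ls -la /mount-9p"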

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service list -o json
functional_test.go:1494: Took "580.171625ms" to run "out/minikube-linux-arm64 -p functional-138958 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30932
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30932
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdspecific-port510714420/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.512272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 18:14:05.265729  288541 retry.go:31] will retry after 589.017517ms: exit status 1
E1008 18:14:05.337518  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.343958  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.355329  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.376867  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.418311  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.499689  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:05.661769  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T /mount-9p | grep 9p"
E1008 18:14:05.983486  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdspecific-port510714420/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "sudo umount -f /mount-9p"
E1008 18:14:06.625113  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "sudo umount -f /mount-9p": exit status 1 (265.169254ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-138958 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdspecific-port510714420/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T" /mount1: exit status 1 (696.357556ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 18:14:07.563899  288541 retry.go:31] will retry after 294.090165ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T" /mount1
E1008 18:14:07.906409  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-138958 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-138958 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062628000/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 version -o=json --components: (1.199246981s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138958 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-138958
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-138958
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138958 image ls --format short --alsologtostderr:
I1008 18:14:18.085870  327172 out.go:345] Setting OutFile to fd 1 ...
I1008 18:14:18.086038  327172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.086051  327172 out.go:358] Setting ErrFile to fd 2...
I1008 18:14:18.086057  327172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.086317  327172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:14:18.087012  327172 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.087185  327172 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.087679  327172 cli_runner.go:164] Run: docker container inspect functional-138958 --format={{.State.Status}}
I1008 18:14:18.107114  327172 ssh_runner.go:195] Run: systemctl --version
I1008 18:14:18.107332  327172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138958
I1008 18:14:18.131469  327172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/functional-138958/id_rsa Username:docker}
I1008 18:14:18.231090  327172 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
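The image ls subtests render the same node-side image inventory in different formats; the stderr above shows that on a containerd cluster the data ultimately comes from "sudo crictl images --output json" run over SSH inside the node, then re-rendered by minikube. The three variants exercised in this report:

	out/minikube-linux-arm64 -p functional-138958 image ls --format short
	out/minikube-linux-arm64 -p functional-138958 image ls --format table
	out/minikube-linux-arm64 -p functional-138958 image ls --format json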

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138958 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| docker.io/library/minikube-local-cache-test | functional-138958  | sha256:5bb040 | 990B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-138958  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138958 image ls --format table --alsologtostderr:
I1008 18:14:18.941888  327372 out.go:345] Setting OutFile to fd 1 ...
I1008 18:14:18.942048  327372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.942054  327372 out.go:358] Setting ErrFile to fd 2...
I1008 18:14:18.942060  327372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.942303  327372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:14:18.942940  327372 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.943053  327372 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.943539  327372 cli_runner.go:164] Run: docker container inspect functional-138958 --format={{.State.Status}}
I1008 18:14:18.963847  327372 ssh_runner.go:195] Run: systemctl --version
I1008 18:14:18.963906  327372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138958
I1008 18:14:18.983173  327372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/functional-138958/id_rsa Username:docker}
I1008 18:14:19.080145  327372 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138958 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests"
:["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d991
9f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:5bb040125f84ab60948a3bd6732e42fcca627585116902ae98e9cde684fc93d5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-138958"],"size":"990"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff00
1e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-138958"],"size":"2173567"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae3343
0bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138958 image ls --format json --alsologtostderr:
I1008 18:14:18.649489  327316 out.go:345] Setting OutFile to fd 1 ...
I1008 18:14:18.649936  327316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.649944  327316 out.go:358] Setting ErrFile to fd 2...
I1008 18:14:18.649949  327316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.650366  327316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:14:18.651413  327316 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.651553  327316 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.652451  327316 cli_runner.go:164] Run: docker container inspect functional-138958 --format={{.State.Status}}
I1008 18:14:18.690832  327316 ssh_runner.go:195] Run: systemctl --version
I1008 18:14:18.690894  327316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138958
I1008 18:14:18.713003  327316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/functional-138958/id_rsa Username:docker}
I1008 18:14:18.810150  327316 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
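
Note: the JSON output above is a flat array of objects with id, repoDigests, repoTags and size fields, so it is easy to post-process outside the test. A minimal sketch, assuming jq is available on the host (the test itself does not do this):

    # print "<first tag>  <size in bytes>" for every tagged image reported by the profile
    out/minikube-linux-arm64 -p functional-138958 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'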

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-138958 image ls --format yaml --alsologtostderr:
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-138958
size: "2173567"
- id: sha256:5bb040125f84ab60948a3bd6732e42fcca627585116902ae98e9cde684fc93d5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-138958
size: "990"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138958 image ls --format yaml --alsologtostderr:
I1008 18:14:18.377496  327223 out.go:345] Setting OutFile to fd 1 ...
I1008 18:14:18.377650  327223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.377656  327223 out.go:358] Setting ErrFile to fd 2...
I1008 18:14:18.377661  327223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.377935  327223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:14:18.378561  327223 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.378703  327223 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.379162  327223 cli_runner.go:164] Run: docker container inspect functional-138958 --format={{.State.Status}}
I1008 18:14:18.400675  327223 ssh_runner.go:195] Run: systemctl --version
I1008 18:14:18.400729  327223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138958
I1008 18:14:18.431254  327223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/functional-138958/id_rsa Username:docker}
I1008 18:14:18.527637  327223 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-138958 ssh pgrep buildkitd: exit status 1 (297.996185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image build -t localhost/my-image:functional-138958 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 image build -t localhost/my-image:functional-138958 testdata/build --alsologtostderr: (3.273528885s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-138958 image build -t localhost/my-image:functional-138958 testdata/build --alsologtostderr:
I1008 18:14:18.693412  327320 out.go:345] Setting OutFile to fd 1 ...
I1008 18:14:18.694043  327320 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.694059  327320 out.go:358] Setting ErrFile to fd 2...
I1008 18:14:18.694065  327320 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 18:14:18.694320  327320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
I1008 18:14:18.695002  327320 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.695995  327320 config.go:182] Loaded profile config "functional-138958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1008 18:14:18.696487  327320 cli_runner.go:164] Run: docker container inspect functional-138958 --format={{.State.Status}}
I1008 18:14:18.731646  327320 ssh_runner.go:195] Run: systemctl --version
I1008 18:14:18.732418  327320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-138958
I1008 18:14:18.754463  327320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/functional-138958/id_rsa Username:docker}
I1008 18:14:18.855837  327320 build_images.go:161] Building image from path: /tmp/build.4099739061.tar
I1008 18:14:18.855909  327320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 18:14:18.873356  327320 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4099739061.tar
I1008 18:14:18.878228  327320 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4099739061.tar: stat -c "%s %y" /var/lib/minikube/build/build.4099739061.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4099739061.tar': No such file or directory
I1008 18:14:18.878259  327320 ssh_runner.go:362] scp /tmp/build.4099739061.tar --> /var/lib/minikube/build/build.4099739061.tar (3072 bytes)
I1008 18:14:18.903383  327320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4099739061
I1008 18:14:18.913160  327320 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4099739061 -xf /var/lib/minikube/build/build.4099739061.tar
I1008 18:14:18.922708  327320 containerd.go:394] Building image: /var/lib/minikube/build/build.4099739061
I1008 18:14:18.922780  327320 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4099739061 --local dockerfile=/var/lib/minikube/build/build.4099739061 --output type=image,name=localhost/my-image:functional-138958
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 DONE 0.5s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:f823aaeb61ea50e351c2b814fff569029df45567e82d53d823d3b45a521aed47
#8 exporting manifest sha256:f823aaeb61ea50e351c2b814fff569029df45567e82d53d823d3b45a521aed47 0.0s done
#8 exporting config sha256:a38d634936002fd33caec5aa82b57b2b06870cb1893df0a21031cf5ffe84954d 0.0s done
#8 naming to localhost/my-image:functional-138958 done
#8 DONE 0.2s
I1008 18:14:21.837917  327320 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4099739061 --local dockerfile=/var/lib/minikube/build/build.4099739061 --output type=image,name=localhost/my-image:functional-138958: (2.915111728s)
I1008 18:14:21.837998  327320 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4099739061
I1008 18:14:21.847841  327320 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4099739061.tar
I1008 18:14:21.856886  327320 build_images.go:217] Built localhost/my-image:functional-138958 from /tmp/build.4099739061.tar
I1008 18:14:21.856918  327320 build_images.go:133] succeeded building to: functional-138958
I1008 18:14:21.856924  327320 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
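
Note: the three build steps visible in the buildctl output above (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) imply a very small build context. A sketch of an equivalent context and the same build invocation; the exact contents of testdata/build are reconstructed from the log, so treat the file contents as an approximation:

    # recreate a context equivalent to testdata/build and build it inside the cluster's containerd
    mkdir -p /tmp/build
    printf 'test\n' > /tmp/build/content.txt
    cat > /tmp/build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-arm64 -p functional-138958 image build -t localhost/my-image:functional-138958 /tmp/build
    out/minikube-linux-arm64 -p functional-138958 image ls   # localhost/my-image should now be listed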

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
E1008 18:14:10.468410  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-138958
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958 --alsologtostderr: (1.164035294s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958 --alsologtostderr: (1.095355971s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-138958
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)
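
Note: the three daemon-load tests above share the same flow: materialise the image in the host's Docker daemon, tag it with the profile name, then push it into the cluster's containerd. A condensed sketch of that flow (the final grep is only an illustration of verifying the result, not part of the test):

    # tag a host-side image for the profile and load it into the cluster via the daemon path
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-138958
    out/minikube-linux-arm64 -p functional-138958 image load --daemon kicbase/echo-server:functional-138958
    out/minikube-linux-arm64 -p functional-138958 image ls | grep echo-server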

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image save kicbase/echo-server:functional-138958 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
2024/10/08 18:14:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
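
Note: all three update-context variants run the same command; what differs is the kubeconfig state they start from. A quick way to confirm the rewritten entry points at the running cluster, assuming the kubeconfig context is named after the profile (minikube's default):

    # rewrite the kubeconfig entry for the profile, then query the API server through that context
    out/minikube-linux-arm64 -p functional-138958 update-context
    kubectl cluster-info --context functional-138958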

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image rm kicbase/echo-server:functional-138958 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
E1008 18:14:15.589812  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)
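
Note: together with ImageSaveToFile above this exercises a save/load round trip through a tarball. A condensed sketch of the same flow (the /tmp path is shortened for readability; the test uses the Jenkins workspace path shown above):

    # export an image from the cluster to a tarball, then import it back and check it is listed
    out/minikube-linux-arm64 -p functional-138958 image save kicbase/echo-server:functional-138958 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-138958 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-138958 image ls | grep echo-server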

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-138958
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-138958 image save --daemon kicbase/echo-server:functional-138958 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-138958
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-138958
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-138958
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-138958
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (117.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-860946 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1008 18:14:25.831090  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:14:46.312491  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:15:27.274670  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-860946 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m56.445043321s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (117.25s)
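
Note: the --ha flag above brings up a cluster with three control-plane nodes plus a worker behind the virtual endpoint 192.168.49.254 (the server address visible in the status output later in this report). A quick sanity check after such a start, assuming the same profile name:

    # list the nodes of the HA cluster through the profile's kubectl wrapper, then recheck component status
    out/minikube-linux-arm64 kubectl -p ha-860946 -- get nodes -o wide
    out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr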

                                                
                                    
TestMultiControlPlane/serial/DeployApp (30.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- rollout status deployment/busybox
E1008 18:16:49.197854  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-860946 -- rollout status deployment/busybox: (27.839158435s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-99dm5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-n67br -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-99dm5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-n67br -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-99dm5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-n67br -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.69s)
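
Note: the deployment check above resolves an external name (kubernetes.io) and the in-cluster service names from every busybox replica. A sketch that repeats the in-cluster lookup for each pod; the loop is an illustration, the test enumerates the pod names explicitly as shown above:

    # run the cluster-internal DNS check from every pod returned by the same jsonpath query the test uses
    for pod in $(out/minikube-linux-arm64 kubectl -p ha-860946 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-arm64 kubectl -p ha-860946 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done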

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-99dm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-99dm5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-n67br -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-n67br -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)
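
Note: the awk 'NR==5' | cut -d' ' -f3 pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (192.168.49.1 on the default docker network), and the follow-up ping confirms the pod can reach the host. The same check for a single pod, written as two steps:

    # resolve the host gateway from inside the pod, then ping it once
    HOST_IP=$(out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 kubectl -p ha-860946 -- exec busybox-7dff88458-5rsxm -- sh -c "ping -c 1 $HOST_IP"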

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-860946 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-860946 -v=7 --alsologtostderr: (23.360675492s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.35s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-860946 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 status --output json -v=7 --alsologtostderr: (1.015398s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp testdata/cp-test.txt ha-860946:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4191350908/001/cp-test_ha-860946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946:/home/docker/cp-test.txt ha-860946-m02:/home/docker/cp-test_ha-860946_ha-860946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test_ha-860946_ha-860946-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946:/home/docker/cp-test.txt ha-860946-m03:/home/docker/cp-test_ha-860946_ha-860946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test_ha-860946_ha-860946-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946:/home/docker/cp-test.txt ha-860946-m04:/home/docker/cp-test_ha-860946_ha-860946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test_ha-860946_ha-860946-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp testdata/cp-test.txt ha-860946-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4191350908/001/cp-test_ha-860946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m02:/home/docker/cp-test.txt ha-860946:/home/docker/cp-test_ha-860946-m02_ha-860946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test_ha-860946-m02_ha-860946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m02:/home/docker/cp-test.txt ha-860946-m03:/home/docker/cp-test_ha-860946-m02_ha-860946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test_ha-860946-m02_ha-860946-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m02:/home/docker/cp-test.txt ha-860946-m04:/home/docker/cp-test_ha-860946-m02_ha-860946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test_ha-860946-m02_ha-860946-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp testdata/cp-test.txt ha-860946-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4191350908/001/cp-test_ha-860946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m03:/home/docker/cp-test.txt ha-860946:/home/docker/cp-test_ha-860946-m03_ha-860946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test_ha-860946-m03_ha-860946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m03:/home/docker/cp-test.txt ha-860946-m02:/home/docker/cp-test_ha-860946-m03_ha-860946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test_ha-860946-m03_ha-860946-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m03:/home/docker/cp-test.txt ha-860946-m04:/home/docker/cp-test_ha-860946-m03_ha-860946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test_ha-860946-m03_ha-860946-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp testdata/cp-test.txt ha-860946-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4191350908/001/cp-test_ha-860946-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m04:/home/docker/cp-test.txt ha-860946:/home/docker/cp-test_ha-860946-m04_ha-860946.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946 "sudo cat /home/docker/cp-test_ha-860946-m04_ha-860946.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m04:/home/docker/cp-test.txt ha-860946-m02:/home/docker/cp-test_ha-860946-m04_ha-860946-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test_ha-860946-m04_ha-860946-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 cp ha-860946-m04:/home/docker/cp-test.txt ha-860946-m03:/home/docker/cp-test_ha-860946-m04_ha-860946-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m03 "sudo cat /home/docker/cp-test_ha-860946-m04_ha-860946-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.64s)
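
Note: the matrix above copies testdata/cp-test.txt into every node (directly and node-to-node) and reads each copy back over ssh. One round of that, reduced to the two commands involved:

    # copy a file to a specific node of the profile, then verify its contents over ssh
    out/minikube-linux-arm64 -p ha-860946 cp testdata/cp-test.txt ha-860946-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-860946 ssh -n ha-860946-m02 "sudo cat /home/docker/cp-test.txt"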

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 node stop m02 -v=7 --alsologtostderr: (12.124049797s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr: exit status 7 (757.277947ms)

                                                
                                                
-- stdout --
	ha-860946
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-860946-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-860946-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-860946-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:17:50.646206  343524 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:17:50.646398  343524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:17:50.646412  343524 out.go:358] Setting ErrFile to fd 2...
	I1008 18:17:50.646419  343524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:17:50.646742  343524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:17:50.647004  343524 out.go:352] Setting JSON to false
	I1008 18:17:50.647034  343524 mustload.go:65] Loading cluster: ha-860946
	I1008 18:17:50.647493  343524 notify.go:220] Checking for updates...
	I1008 18:17:50.647639  343524 config.go:182] Loaded profile config "ha-860946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:17:50.647658  343524 status.go:174] checking status of ha-860946 ...
	I1008 18:17:50.648324  343524 cli_runner.go:164] Run: docker container inspect ha-860946 --format={{.State.Status}}
	I1008 18:17:50.671541  343524 status.go:371] ha-860946 host status = "Running" (err=<nil>)
	I1008 18:17:50.671574  343524 host.go:66] Checking if "ha-860946" exists ...
	I1008 18:17:50.671876  343524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-860946
	I1008 18:17:50.694397  343524 host.go:66] Checking if "ha-860946" exists ...
	I1008 18:17:50.694738  343524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:17:50.694791  343524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-860946
	I1008 18:17:50.715958  343524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/ha-860946/id_rsa Username:docker}
	I1008 18:17:50.807850  343524 ssh_runner.go:195] Run: systemctl --version
	I1008 18:17:50.812526  343524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:17:50.824192  343524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:17:50.888523  343524 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-08 18:17:50.877852633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:17:50.889157  343524 kubeconfig.go:125] found "ha-860946" server: "https://192.168.49.254:8443"
	I1008 18:17:50.889207  343524 api_server.go:166] Checking apiserver status ...
	I1008 18:17:50.889255  343524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:17:50.901414  343524 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	I1008 18:17:50.911695  343524 api_server.go:182] apiserver freezer: "7:freezer:/docker/818ea193e7ec7161c190bea740cdd9fc0a508e1f73bfdda0c3ae419808c47c8c/kubepods/burstable/podb49ec00d6e92056fffd98e8fed610960/04bb56befc0355c03a67b7736b21e3a4ef80df9d78bbfd4dcfcd35310b062aff"
	I1008 18:17:50.911764  343524 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/818ea193e7ec7161c190bea740cdd9fc0a508e1f73bfdda0c3ae419808c47c8c/kubepods/burstable/podb49ec00d6e92056fffd98e8fed610960/04bb56befc0355c03a67b7736b21e3a4ef80df9d78bbfd4dcfcd35310b062aff/freezer.state
	I1008 18:17:50.921320  343524 api_server.go:204] freezer state: "THAWED"
	I1008 18:17:50.921350  343524 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 18:17:50.930129  343524 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 18:17:50.930161  343524 status.go:463] ha-860946 apiserver status = Running (err=<nil>)
	I1008 18:17:50.930172  343524 status.go:176] ha-860946 status: &{Name:ha-860946 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:17:50.930190  343524 status.go:174] checking status of ha-860946-m02 ...
	I1008 18:17:50.930514  343524 cli_runner.go:164] Run: docker container inspect ha-860946-m02 --format={{.State.Status}}
	I1008 18:17:50.952088  343524 status.go:371] ha-860946-m02 host status = "Stopped" (err=<nil>)
	I1008 18:17:50.952114  343524 status.go:384] host is not running, skipping remaining checks
	I1008 18:17:50.952121  343524 status.go:176] ha-860946-m02 status: &{Name:ha-860946-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:17:50.952141  343524 status.go:174] checking status of ha-860946-m03 ...
	I1008 18:17:50.952485  343524 cli_runner.go:164] Run: docker container inspect ha-860946-m03 --format={{.State.Status}}
	I1008 18:17:50.972550  343524 status.go:371] ha-860946-m03 host status = "Running" (err=<nil>)
	I1008 18:17:50.972600  343524 host.go:66] Checking if "ha-860946-m03" exists ...
	I1008 18:17:50.972934  343524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-860946-m03
	I1008 18:17:50.990739  343524 host.go:66] Checking if "ha-860946-m03" exists ...
	I1008 18:17:50.991060  343524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:17:50.991107  343524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-860946-m03
	I1008 18:17:51.024699  343524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/ha-860946-m03/id_rsa Username:docker}
	I1008 18:17:51.123419  343524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:17:51.136840  343524 kubeconfig.go:125] found "ha-860946" server: "https://192.168.49.254:8443"
	I1008 18:17:51.136870  343524 api_server.go:166] Checking apiserver status ...
	I1008 18:17:51.136912  343524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:17:51.148962  343524 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1321/cgroup
	I1008 18:17:51.158558  343524 api_server.go:182] apiserver freezer: "7:freezer:/docker/931db1a7454a53a656a5b797e7ea8338b994f8b03f81dea2ca2822d751e9bf1b/kubepods/burstable/pod0a49a4d53460575a104defb2ddd0a274/b2d49c88ca7192fd1558afb9278bdbc856d4bf3f416722921463e30970abb6c0"
	I1008 18:17:51.158631  343524 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/931db1a7454a53a656a5b797e7ea8338b994f8b03f81dea2ca2822d751e9bf1b/kubepods/burstable/pod0a49a4d53460575a104defb2ddd0a274/b2d49c88ca7192fd1558afb9278bdbc856d4bf3f416722921463e30970abb6c0/freezer.state
	I1008 18:17:51.167633  343524 api_server.go:204] freezer state: "THAWED"
	I1008 18:17:51.167667  343524 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 18:17:51.175514  343524 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 18:17:51.175541  343524 status.go:463] ha-860946-m03 apiserver status = Running (err=<nil>)
	I1008 18:17:51.175550  343524 status.go:176] ha-860946-m03 status: &{Name:ha-860946-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:17:51.175602  343524 status.go:174] checking status of ha-860946-m04 ...
	I1008 18:17:51.175946  343524 cli_runner.go:164] Run: docker container inspect ha-860946-m04 --format={{.State.Status}}
	I1008 18:17:51.192526  343524 status.go:371] ha-860946-m04 host status = "Running" (err=<nil>)
	I1008 18:17:51.192555  343524 host.go:66] Checking if "ha-860946-m04" exists ...
	I1008 18:17:51.192872  343524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-860946-m04
	I1008 18:17:51.217049  343524 host.go:66] Checking if "ha-860946-m04" exists ...
	I1008 18:17:51.217359  343524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:17:51.217418  343524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-860946-m04
	I1008 18:17:51.234409  343524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/ha-860946-m04/id_rsa Username:docker}
	I1008 18:17:51.327372  343524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:17:51.339889  343524 status.go:176] ha-860946-m04 status: &{Name:ha-860946-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
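Editor's note: the status probe in the stderr above locates the apiserver by PID and freezer cgroup before calling /healthz on the load-balancer endpoint. A minimal Go sketch of that sequence follows; it is illustrative only (assumptions: the commands run locally rather than over SSH as minikube does, TLS verification is skipped, and the 192.168.49.254:8443 endpoint is copied from the log).

// Sketch only: mirrors the apiserver status checks visible in the stderr above.
// In minikube these commands run over SSH inside the node container; here they
// run locally, and TLS verification is skipped purely for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// 1. Locate the kube-apiserver process (cf. sudo pgrep -xnf kube-apiserver.*minikube.*).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// 2. Read its freezer cgroup line; the corresponding freezer.state is then
	//    checked for "THAWED" (that read is elided here).
	cg, _ := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Output()
	fmt.Println("freezer cgroup:", strings.TrimSpace(string(cg)))

	// 3. Probe the load-balancer endpoint seen in the log; HTTP 200 means Running.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.StatusCode)
}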

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 node start m02 -v=7 --alsologtostderr: (29.94070602s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr: (1.122618592s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-860946 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-860946 -v=7 --alsologtostderr
E1008 18:18:32.186367  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.192745  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.204168  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.225560  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.267065  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.348483  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.509963  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:32.831600  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:33.473557  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:34.755313  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:37.317832  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:42.439397  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:18:52.681628  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-860946 -v=7 --alsologtostderr: (37.049636304s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-860946 --wait=true -v=7 --alsologtostderr
E1008 18:19:05.337419  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:19:13.164009  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:19:33.039792  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:19:54.126218  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-860946 --wait=true -v=7 --alsologtostderr: (1m35.47519615s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-860946
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 node delete m03 -v=7 --alsologtostderr: (9.733281595s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.65s)
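Editor's note: the readiness check above relies on a kubectl go-template that prints each node's Ready condition. A small Go sketch that runs the same command and asserts every node reports True (a hypothetical helper, not the test's own code):

// Sketch only: runs the same kubectl go-template as the test above and fails
// if any node reports a Ready condition other than "True".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `'{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		os.Exit(1)
	}
	// Strip the literal quotes the template emits, then check each status token.
	statuses := strings.Fields(strings.Trim(strings.TrimSpace(string(out)), "'"))
	for _, s := range statuses {
		if s != "True" {
			fmt.Println("node not Ready:", s)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}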

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 stop -v=7 --alsologtostderr
E1008 18:21:16.047676  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-860946 stop -v=7 --alsologtostderr: (35.962049117s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr: exit status 7 (114.046283ms)

                                                
                                                
-- stdout --
	ha-860946
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-860946-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-860946-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:21:24.350430  357824 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:21:24.350650  357824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:21:24.350678  357824 out.go:358] Setting ErrFile to fd 2...
	I1008 18:21:24.350698  357824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:21:24.351107  357824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:21:24.351372  357824 out.go:352] Setting JSON to false
	I1008 18:21:24.351425  357824 mustload.go:65] Loading cluster: ha-860946
	I1008 18:21:24.352362  357824 notify.go:220] Checking for updates...
	I1008 18:21:24.352497  357824 config.go:182] Loaded profile config "ha-860946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:21:24.352526  357824 status.go:174] checking status of ha-860946 ...
	I1008 18:21:24.353123  357824 cli_runner.go:164] Run: docker container inspect ha-860946 --format={{.State.Status}}
	I1008 18:21:24.369752  357824 status.go:371] ha-860946 host status = "Stopped" (err=<nil>)
	I1008 18:21:24.369774  357824 status.go:384] host is not running, skipping remaining checks
	I1008 18:21:24.369782  357824 status.go:176] ha-860946 status: &{Name:ha-860946 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:21:24.369812  357824 status.go:174] checking status of ha-860946-m02 ...
	I1008 18:21:24.370106  357824 cli_runner.go:164] Run: docker container inspect ha-860946-m02 --format={{.State.Status}}
	I1008 18:21:24.386241  357824 status.go:371] ha-860946-m02 host status = "Stopped" (err=<nil>)
	I1008 18:21:24.386265  357824 status.go:384] host is not running, skipping remaining checks
	I1008 18:21:24.386272  357824 status.go:176] ha-860946-m02 status: &{Name:ha-860946-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:21:24.386296  357824 status.go:174] checking status of ha-860946-m04 ...
	I1008 18:21:24.386605  357824 cli_runner.go:164] Run: docker container inspect ha-860946-m04 --format={{.State.Status}}
	I1008 18:21:24.412384  357824 status.go:371] ha-860946-m04 host status = "Stopped" (err=<nil>)
	I1008 18:21:24.412405  357824 status.go:384] host is not running, skipping remaining checks
	I1008 18:21:24.412412  357824 status.go:176] ha-860946-m04 status: &{Name:ha-860946-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)
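Editor's note: the Stopped/Stopped rows above come from a per-node host check done with docker container inspect --format {{.State.Status}}. A minimal sketch of that loop; the node names are taken from this profile and are otherwise an assumption.

// Sketch only: the per-node host check behind the status output above, i.e.
// docker container inspect --format {{.State.Status}} for each profile node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent" // inspect fails when the container does not exist
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, node := range []string{"ha-860946", "ha-860946-m02", "ha-860946-m04"} {
		fmt.Printf("%s: %s\n", node, hostState(node))
	}
}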

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (52.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-860946 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-860946 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.053905696s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-860946 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-860946 --control-plane -v=7 --alsologtostderr: (43.436242284s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-860946 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.00577542s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                    
x
+
TestJSONOutput/start/Command (47.42s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-915271 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1008 18:23:32.186586  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-915271 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (47.41380454s)
--- PASS: TestJSONOutput/start/Command (47.42s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-915271 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-915271 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-915271 --output=json --user=testUser
E1008 18:23:59.890657  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:24:05.337300  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-915271 --output=json --user=testUser: (5.798084648s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-590929 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-590929 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.942363ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bf10911c-a7c9-48f8-a54e-603a2db14c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-590929] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5397fc6b-68e0-4847-9b24-1c7ce0d7bebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"73521a7a-84f4-426f-8bb2-9e07d1c59c51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5561165a-d605-4ec5-b9a0-76430ab50082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig"}}
	{"specversion":"1.0","id":"a96e0a39-ec91-4721-a251-4d69a2aad6a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube"}}
	{"specversion":"1.0","id":"c0c75359-1301-4daf-9ed7-7c951a9c7409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4f2df23e-4f84-40f9-aa0b-6436eb59a476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"95a102da-49c1-466e-aba0-b124d91ff2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-590929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-590929
--- PASS: TestErrorJSONOutput (0.23s)
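Editor's note: each --output=json line above is a CloudEvents-style envelope; the final one carries the DRV_UNSUPPORTED_OS error with exit code 56. A small Go sketch that decodes such a line (the struct mirrors the JSON keys in the log and is not minikube's own type):

// Sketch only: decodes one of the CloudEvents-style lines emitted by
// "minikube start --output=json", as shown in the stdout above.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"95a102da-49c1-466e-aba0-b124d91ff2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// An error event carries its exit code and message in the data map.
	fmt.Printf("type=%s exitcode=%s msg=%q\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
}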

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-512593 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-512593 --network=: (36.053027491s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-512593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-512593
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-512593: (2.081150432s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.15s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (33.81s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-861201 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-861201 --network=bridge: (31.83134428s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-861201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-861201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-861201: (1.958523786s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.81s)

                                                
                                    
x
+
TestKicExistingNetwork (30.09s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1008 18:25:22.575484  288541 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 18:25:22.596378  288541 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 18:25:22.596455  288541 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1008 18:25:22.596473  288541 cli_runner.go:164] Run: docker network inspect existing-network
W1008 18:25:22.616540  288541 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1008 18:25:22.616582  288541 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1008 18:25:22.616596  288541 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1008 18:25:22.616702  288541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 18:25:22.632297  288541 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a053b44b9f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:14:5b:20} reservation:<nil>}
I1008 18:25:22.632677  288541 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000ee90}
I1008 18:25:22.632700  288541 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1008 18:25:22.632749  288541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1008 18:25:22.701051  288541 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-073819 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-073819 --network=existing-network: (27.904780444s)
helpers_test.go:175: Cleaning up "existing-network-073819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-073819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-073819: (2.031456249s)
I1008 18:25:52.652726  288541 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.09s)
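Editor's note: the network_create.go lines above show how the free 192.168.58.0/24 subnet is turned into a user-defined bridge network. A sketch that replays the same docker commands (the name and CIDR come from the log; one label is omitted for brevity):

// Sketch only: creates a bridge network on a fixed subnet as the log above
// does, then reads back the subnet that was assigned.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Create the network roughly as minikube does.
	if out, err := run("network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"existing-network"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}

	// Confirm the subnet (same format string as TestKicCustomSubnet uses below).
	out, _ := run("network", "inspect", "existing-network",
		"--format", "{{(index .IPAM.Config 0).Subnet}}")
	fmt.Print(out)
}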

                                                
                                    
x
+
TestKicCustomSubnet (32.78s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-504612 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-504612 --subnet=192.168.60.0/24: (30.625965383s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-504612 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-504612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-504612
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-504612: (2.127389325s)
--- PASS: TestKicCustomSubnet (32.78s)

                                                
                                    
x
+
TestKicStaticIP (32.14s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-851733 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-851733 --static-ip=192.168.200.200: (30.278668278s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-851733 ip
helpers_test.go:175: Cleaning up "static-ip-851733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-851733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-851733: (1.712257179s)
--- PASS: TestKicStaticIP (32.14s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (66.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-544005 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-544005 --driver=docker  --container-runtime=containerd: (31.289226774s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-546655 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-546655 --driver=docker  --container-runtime=containerd: (29.806376581s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-544005
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-546655
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-546655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-546655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-546655: (2.049924487s)
helpers_test.go:175: Cleaning up "first-544005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-544005
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-544005: (1.937317303s)
--- PASS: TestMinikubeProfile (66.47s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-278848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-278848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.901434915s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-278848 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-280612 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-280612 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.260471995s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.26s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-280612 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-278848 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-278848 --alsologtostderr -v=5: (1.62899331s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-280612 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-280612
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-280612: (1.205494867s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-280612
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-280612: (6.221218016s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-280612 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (69.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-302708 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1008 18:28:32.183946  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:29:05.337189  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-302708 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.861192948s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.38s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (16.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-302708 -- rollout status deployment/busybox: (14.112515745s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-cfxlc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-zfjfx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-cfxlc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-zfjfx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-cfxlc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-zfjfx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.04s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-cfxlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-cfxlc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-zfjfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-302708 -- exec busybox-7dff88458-zfjfx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
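Editor's note: the host-reachability check above resolves host.minikube.internal inside a busybox pod and pings the returned address once. A Go sketch of that round trip; plain kubectl is used here instead of "minikube kubectl --", and the pod name from the log is an assumption.

// Sketch only: resolve host.minikube.internal from inside a pod, then ping it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-cfxlc"

	// Resolve host.minikube.internal inside the pod (same pipeline as the test).
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))

	// Ping the host gateway once from the same pod.
	ping, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"ping -c 1 "+hostIP).CombinedOutput()
	fmt.Println(strings.TrimSpace(string(ping)), err)
}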

                                                
                                    
x
+
TestMultiNode/serial/AddNode (15.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-302708 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-302708 -v 3 --alsologtostderr: (15.207248506s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.85s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-302708 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp testdata/cp-test.txt multinode-302708:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3956543111/001/cp-test_multinode-302708.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708:/home/docker/cp-test.txt multinode-302708-m02:/home/docker/cp-test_multinode-302708_multinode-302708-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test_multinode-302708_multinode-302708-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708:/home/docker/cp-test.txt multinode-302708-m03:/home/docker/cp-test_multinode-302708_multinode-302708-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test_multinode-302708_multinode-302708-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp testdata/cp-test.txt multinode-302708-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3956543111/001/cp-test_multinode-302708-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m02:/home/docker/cp-test.txt multinode-302708:/home/docker/cp-test_multinode-302708-m02_multinode-302708.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test_multinode-302708-m02_multinode-302708.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m02:/home/docker/cp-test.txt multinode-302708-m03:/home/docker/cp-test_multinode-302708-m02_multinode-302708-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test_multinode-302708-m02_multinode-302708-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp testdata/cp-test.txt multinode-302708-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3956543111/001/cp-test_multinode-302708-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m03:/home/docker/cp-test.txt multinode-302708:/home/docker/cp-test_multinode-302708-m03_multinode-302708.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test_multinode-302708-m03_multinode-302708.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 cp multinode-302708-m03:/home/docker/cp-test.txt multinode-302708-m02:/home/docker/cp-test_multinode-302708-m03_multinode-302708-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test_multinode-302708-m03_multinode-302708-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.87s)
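The copy matrix above reduces to two primitives: minikube cp, which accepts either a local path or <node>:<path> on each side, and minikube ssh -n, which reads the file back on the target node. A condensed manual check against the same profile (minikube here stands for out/minikube-linux-arm64 as used in this run):

	# local file into the primary node, then read it back
	minikube -p multinode-302708 cp testdata/cp-test.txt multinode-302708:/home/docker/cp-test.txt
	minikube -p multinode-302708 ssh -n multinode-302708 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copy, verified on the destination worker
	minikube -p multinode-302708 cp multinode-302708:/home/docker/cp-test.txt multinode-302708-m02:/home/docker/cp-test.txt
	minikube -p multinode-302708 ssh -n multinode-302708-m02 "sudo cat /home/docker/cp-test.txt"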

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-302708 node stop m03: (1.221501375s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-302708 status: exit status 7 (501.477028ms)

                                                
                                                
-- stdout --
	multinode-302708
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-302708-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-302708-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr: exit status 7 (515.40337ms)

                                                
                                                
-- stdout --
	multinode-302708
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-302708-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-302708-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:30:23.883853  411206 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:30:23.884035  411206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:30:23.884063  411206 out.go:358] Setting ErrFile to fd 2...
	I1008 18:30:23.884083  411206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:30:23.884352  411206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:30:23.884594  411206 out.go:352] Setting JSON to false
	I1008 18:30:23.884673  411206 mustload.go:65] Loading cluster: multinode-302708
	I1008 18:30:23.884742  411206 notify.go:220] Checking for updates...
	I1008 18:30:23.885114  411206 config.go:182] Loaded profile config "multinode-302708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:30:23.885130  411206 status.go:174] checking status of multinode-302708 ...
	I1008 18:30:23.886176  411206 cli_runner.go:164] Run: docker container inspect multinode-302708 --format={{.State.Status}}
	I1008 18:30:23.906148  411206 status.go:371] multinode-302708 host status = "Running" (err=<nil>)
	I1008 18:30:23.906184  411206 host.go:66] Checking if "multinode-302708" exists ...
	I1008 18:30:23.906526  411206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-302708
	I1008 18:30:23.936353  411206 host.go:66] Checking if "multinode-302708" exists ...
	I1008 18:30:23.936778  411206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:30:23.936861  411206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-302708
	I1008 18:30:23.956235  411206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/multinode-302708/id_rsa Username:docker}
	I1008 18:30:24.047110  411206 ssh_runner.go:195] Run: systemctl --version
	I1008 18:30:24.051653  411206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:30:24.063846  411206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:30:24.117892  411206 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-08 18:30:24.107208102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:30:24.118528  411206 kubeconfig.go:125] found "multinode-302708" server: "https://192.168.67.2:8443"
	I1008 18:30:24.118573  411206 api_server.go:166] Checking apiserver status ...
	I1008 18:30:24.118632  411206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:30:24.130378  411206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I1008 18:30:24.141493  411206 api_server.go:182] apiserver freezer: "7:freezer:/docker/328e96a3c06ab1cf9264ddce63334dd5b0e865d868324c5ea2accf88a5d35b9a/kubepods/burstable/podba684761d6f319da090d7093c1aec14e/30fff77213feafa00498c7d574b9a986e49e32d9c64ec7be88e0dc262f8f1d57"
	I1008 18:30:24.141578  411206 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/328e96a3c06ab1cf9264ddce63334dd5b0e865d868324c5ea2accf88a5d35b9a/kubepods/burstable/podba684761d6f319da090d7093c1aec14e/30fff77213feafa00498c7d574b9a986e49e32d9c64ec7be88e0dc262f8f1d57/freezer.state
	I1008 18:30:24.152039  411206 api_server.go:204] freezer state: "THAWED"
	I1008 18:30:24.152073  411206 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1008 18:30:24.161256  411206 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1008 18:30:24.161284  411206 status.go:463] multinode-302708 apiserver status = Running (err=<nil>)
	I1008 18:30:24.161294  411206 status.go:176] multinode-302708 status: &{Name:multinode-302708 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:30:24.161312  411206 status.go:174] checking status of multinode-302708-m02 ...
	I1008 18:30:24.161648  411206 cli_runner.go:164] Run: docker container inspect multinode-302708-m02 --format={{.State.Status}}
	I1008 18:30:24.178555  411206 status.go:371] multinode-302708-m02 host status = "Running" (err=<nil>)
	I1008 18:30:24.178584  411206 host.go:66] Checking if "multinode-302708-m02" exists ...
	I1008 18:30:24.178879  411206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-302708-m02
	I1008 18:30:24.194806  411206 host.go:66] Checking if "multinode-302708-m02" exists ...
	I1008 18:30:24.195120  411206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:30:24.195166  411206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-302708-m02
	I1008 18:30:24.211820  411206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19774-283126/.minikube/machines/multinode-302708-m02/id_rsa Username:docker}
	I1008 18:30:24.306690  411206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:30:24.318396  411206 status.go:176] multinode-302708-m02 status: &{Name:multinode-302708-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:30:24.318431  411206 status.go:174] checking status of multinode-302708-m03 ...
	I1008 18:30:24.318765  411206 cli_runner.go:164] Run: docker container inspect multinode-302708-m03 --format={{.State.Status}}
	I1008 18:30:24.335109  411206 status.go:371] multinode-302708-m03 host status = "Stopped" (err=<nil>)
	I1008 18:30:24.335133  411206 status.go:384] host is not running, skipping remaining checks
	I1008 18:30:24.335141  411206 status.go:176] multinode-302708-m03 status: &{Name:multinode-302708-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
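Stopping a single worker leaves the control plane untouched, and status signals the partial outage through its exit code (7) rather than through stderr, so a script can gate on it directly:

	minikube -p multinode-302708 node stop m03
	minikube -p multinode-302708 status
	echo $?    # 7: at least one node reports host/kubelet Stopped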

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 node start m03 -v=7 --alsologtostderr
E1008 18:30:28.401466  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-302708 node start m03 -v=7 --alsologtostderr: (8.755710198s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.48s)
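Bringing the stopped worker back is the inverse operation; once node start returns, both minikube status and the API server should list the node again:

	minikube -p multinode-302708 node start m03
	minikube -p multinode-302708 status
	kubectl get nodes    # all three nodes listed again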

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-302708
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-302708
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-302708: (24.95310057s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-302708 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-302708 --wait=true -v=8 --alsologtostderr: (54.971053021s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-302708
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.08s)
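The point of this case is that a full-profile stop followed by start --wait=true recreates every node that existed before; that is what the node list comparison before and after the restart checks:

	minikube node list -p multinode-302708     # record the node set
	minikube stop -p multinode-302708
	minikube start -p multinode-302708 --wait=true
	minikube node list -p multinode-302708     # same nodes expected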

                                                
                                    
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-302708 node delete m03: (4.564926749s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)
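Deleting a worker removes both its container and the corresponding Kubernetes node, so the follow-up checks look at minikube status and kubectl:

	minikube -p multinode-302708 node delete m03
	minikube -p multinode-302708 status --alsologtostderr
	kubectl get nodes    # m03 no longer listed, remaining nodes Ready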

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-302708 stop: (23.852147539s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-302708 status: exit status 7 (91.540428ms)

                                                
                                                
-- stdout --
	multinode-302708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-302708-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr: exit status 7 (96.102884ms)

                                                
                                                
-- stdout --
	multinode-302708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-302708-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:32:23.122656  419169 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:32:23.122856  419169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:32:23.122884  419169 out.go:358] Setting ErrFile to fd 2...
	I1008 18:32:23.122903  419169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:32:23.123307  419169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:32:23.123800  419169 out.go:352] Setting JSON to false
	I1008 18:32:23.123841  419169 mustload.go:65] Loading cluster: multinode-302708
	I1008 18:32:23.124156  419169 notify.go:220] Checking for updates...
	I1008 18:32:23.124609  419169 config.go:182] Loaded profile config "multinode-302708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:32:23.124651  419169 status.go:174] checking status of multinode-302708 ...
	I1008 18:32:23.125262  419169 cli_runner.go:164] Run: docker container inspect multinode-302708 --format={{.State.Status}}
	I1008 18:32:23.143504  419169 status.go:371] multinode-302708 host status = "Stopped" (err=<nil>)
	I1008 18:32:23.143526  419169 status.go:384] host is not running, skipping remaining checks
	I1008 18:32:23.143533  419169 status.go:176] multinode-302708 status: &{Name:multinode-302708 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:32:23.143568  419169 status.go:174] checking status of multinode-302708-m02 ...
	I1008 18:32:23.143880  419169 cli_runner.go:164] Run: docker container inspect multinode-302708-m02 --format={{.State.Status}}
	I1008 18:32:23.160064  419169 status.go:371] multinode-302708-m02 host status = "Stopped" (err=<nil>)
	I1008 18:32:23.160086  419169 status.go:384] host is not running, skipping remaining checks
	I1008 18:32:23.160093  419169 status.go:176] multinode-302708-m02 status: &{Name:multinode-302708-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)
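Unlike node stop, a plain stop on the profile halts every node; status then exits 7 with all components reported Stopped, as in the output above:

	minikube -p multinode-302708 stop
	minikube -p multinode-302708 status
	echo $?    # 7: control plane and worker both Stopped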

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-302708 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-302708 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.695799964s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-302708 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.36s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-302708
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-302708-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-302708-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.085705ms)

                                                
                                                
-- stdout --
	* [multinode-302708-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-302708-m02' is duplicated with machine name 'multinode-302708-m02' in profile 'multinode-302708'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-302708-m03 --driver=docker  --container-runtime=containerd
E1008 18:33:32.183892  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-302708-m03 --driver=docker  --container-runtime=containerd: (29.446182287s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-302708
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-302708: exit status 80 (302.569448ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-302708 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-302708-m03 already exists in multinode-302708-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-302708-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-302708-m03: (1.974070556s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.86s)
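Profile names are validated against machine names of existing profiles: reusing multinode-302708-m02 as a new profile name fails with MK_USAGE (exit 14), an unused name such as multinode-302708-m03 starts fine, and node add then refuses to create m03 for the original profile because that name is now taken (exit 80). In outline:

	minikube start -p multinode-302708-m02 --driver=docker --container-runtime=containerd   # exit 14: duplicate name
	minikube start -p multinode-302708-m03 --driver=docker --container-runtime=containerd   # succeeds
	minikube node add -p multinode-302708                                                   # exit 80: m03 already exists
	minikube delete -p multinode-302708-m03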

                                                
                                    
TestPreload (112.12s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-372209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1008 18:34:05.337556  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:34:55.252800  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-372209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.026444325s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-372209 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-372209 image pull gcr.io/k8s-minikube/busybox: (2.057555982s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-372209
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-372209: (12.08996255s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-372209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-372209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.080009844s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-372209 image list
helpers_test.go:175: Cleaning up "test-preload-372209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-372209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-372209: (2.536587342s)
--- PASS: TestPreload (112.12s)
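The preload check is: start an older Kubernetes version with --preload=false, pull an extra image, stop, then restart with preloads enabled and confirm the image is still present in the runtime. Condensed:

	minikube start -p test-preload-372209 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
	minikube -p test-preload-372209 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-372209
	minikube start -p test-preload-372209 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
	minikube -p test-preload-372209 image list    # busybox should still be listed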

                                                
                                    
TestScheduledStopUnix (107.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-695931 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-695931 --memory=2048 --driver=docker  --container-runtime=containerd: (30.628438901s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695931 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-695931 -n scheduled-stop-695931
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1008 18:36:12.548387  288541 retry.go:31] will retry after 146.871µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.548807  288541 retry.go:31] will retry after 182.597µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.549754  288541 retry.go:31] will retry after 119.868µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.550806  288541 retry.go:31] will retry after 403.736µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.551873  288541 retry.go:31] will retry after 525.921µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.552970  288541 retry.go:31] will retry after 1.029044ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.554073  288541 retry.go:31] will retry after 959.94µs: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.555179  288541 retry.go:31] will retry after 1.309963ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.557349  288541 retry.go:31] will retry after 3.02895ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.560486  288541 retry.go:31] will retry after 3.993433ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.564707  288541 retry.go:31] will retry after 7.625425ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.572934  288541 retry.go:31] will retry after 7.089858ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.581189  288541 retry.go:31] will retry after 12.897895ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.594441  288541 retry.go:31] will retry after 13.23848ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.608747  288541 retry.go:31] will retry after 27.471901ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
I1008 18:36:12.637443  288541 retry.go:31] will retry after 54.798839ms: open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/scheduled-stop-695931/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695931 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695931 -n scheduled-stop-695931
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-695931
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-695931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-695931
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-695931: exit status 7 (74.770346ms)

                                                
                                                
-- stdout --
	scheduled-stop-695931
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695931 -n scheduled-stop-695931
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-695931 -n scheduled-stop-695931: exit status 7 (66.55529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-695931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-695931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-695931: (5.508905054s)
--- PASS: TestScheduledStopUnix (107.71s)
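Scheduled stops are armed and disarmed purely from the CLI: --schedule starts a timer, --cancel-scheduled clears it, and once the timer fires the host is Stopped and status exits 7. The essential sequence from the run above:

	minikube stop -p scheduled-stop-695931 --schedule 5m
	minikube status --format={{.TimeToStop}} -p scheduled-stop-695931    # inspect the pending stop
	minikube stop -p scheduled-stop-695931 --cancel-scheduled
	minikube stop -p scheduled-stop-695931 --schedule 15s
	minikube status -p scheduled-stop-695931                             # exit 7 once the stop has fired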

                                                
                                    
TestInsufficientStorage (12.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-266740 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-266740 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.021496313s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6bbbf79e-bb12-4a9d-89c5-000d31835987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-266740] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64ea9b8c-2bdc-45ea-b290-cd330f2b3569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"3a5d0469-0205-4063-af73-0bf3e2085ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55bb3ca0-f87c-4678-85a1-3e11204382ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig"}}
	{"specversion":"1.0","id":"328cda47-429f-4d74-a711-267640cb6b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube"}}
	{"specversion":"1.0","id":"dfb61b97-8833-49ef-9546-a7efbbcb8c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d0d6062b-476e-4337-8f5c-bd1f5d15d6ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0d7d8554-c5ba-44e7-86be-1c87e348da41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0d08b68e-bc17-49f6-b8b9-952c6b184079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c791c833-ed55-451d-a69c-3237cb273944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c96ceeef-8b57-4d9b-a252-1b93361058ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5b89cdc6-94dd-4284-a612-340f2205e63c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-266740\" primary control-plane node in \"insufficient-storage-266740\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe83925e-3c2e-44d5-bee2-c734681ec0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"41577c48-a600-48a3-b06d-c3f675a7d068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"037b0413-3763-416b-8929-0356b7c26d3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-266740 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-266740 --output=json --layout=cluster: exit status 7 (294.573916ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-266740","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-266740","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 18:37:39.403005  437758 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-266740" does not appear in /home/jenkins/minikube-integration/19774-283126/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-266740 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-266740 --output=json --layout=cluster: exit status 7 (280.02666ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-266740","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-266740","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 18:37:39.683656  437819 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-266740" does not appear in /home/jenkins/minikube-integration/19774-283126/kubeconfig
	E1008 18:37:39.693846  437819 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/insufficient-storage-266740/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-266740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-266740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-266740: (1.862191601s)
--- PASS: TestInsufficientStorage (12.46s)
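This test simulates a nearly full /var (the run sets MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 as environment overrides, visible in the JSON output): start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and the clustered status reports code 507. The error text notes the check can be skipped with --force. Roughly:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-266740 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd   # exit 26
	minikube status -p insufficient-storage-266740 --output=json --layout=cluster   # StatusCode 507, InsufficientStorage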

                                                
                                    
TestRunningBinaryUpgrade (85.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3238330432 start -p running-upgrade-302717 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3238330432 start -p running-upgrade-302717 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.343487221s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-302717 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1008 18:43:32.183899  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-302717 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.209940243s)
helpers_test.go:175: Cleaning up "running-upgrade-302717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-302717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-302717: (2.988018313s)
--- PASS: TestRunningBinaryUpgrade (85.39s)
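The running-binary upgrade starts a cluster with a released v1.26.0 binary and then reruns start on the same, still-running profile with the binary under test (note the old binary takes --vm-driver where the new one takes --driver):

	/tmp/minikube-v1.26.0.3238330432 start -p running-upgrade-302717 --memory=2200 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p running-upgrade-302717 --memory=2200 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p running-upgrade-302717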

                                                
                                    
TestKubernetesUpgrade (343.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1008 18:39:05.337453  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.74931639s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-495265
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-495265: (1.545868299s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-495265 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-495265 status --format={{.Host}}: exit status 7 (84.515593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.447581685s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-495265 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (92.197707ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-495265] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-495265
	    minikube start -p kubernetes-upgrade-495265 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4952652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-495265 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.498736514s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-495265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-495265
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-495265: (2.127647321s)
--- PASS: TestKubernetesUpgrade (343.64s)
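Upgrading the Kubernetes version of an existing profile (v1.20.0 to v1.31.1 here) is supported across a stop/start, but downgrading is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED); the suggested alternatives are to delete and recreate the profile at the old version, or to create a second profile. Distilled:

	minikube start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-495265
	minikube start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd   # upgrade: ok
	minikube start -p kubernetes-upgrade-495265 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # downgrade: exit 106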

                                                
                                    
TestMissingContainerUpgrade (167.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.668100805 start -p missing-upgrade-427666 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.668100805 start -p missing-upgrade-427666 --memory=2200 --driver=docker  --container-runtime=containerd: (1m34.495616607s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-427666
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-427666: (10.302110071s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-427666
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-427666 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-427666 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.655840458s)
helpers_test.go:175: Cleaning up "missing-upgrade-427666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-427666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-427666: (2.016620631s)
--- PASS: TestMissingContainerUpgrade (167.43s)
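This variant removes the docker container out from under a profile created by the old binary before upgrading, verifying the new binary can recreate the missing machine:

	/tmp/minikube-v1.26.0.668100805 start -p missing-upgrade-427666 --memory=2200 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-427666 && docker rm missing-upgrade-427666
	out/minikube-linux-arm64 start -p missing-upgrade-427666 --memory=2200 --driver=docker --container-runtime=containerd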

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (84.705448ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-615063] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
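--no-kubernetes and --kubernetes-version are mutually exclusive, so combining them is a usage error (exit 14); the suggested remedy is to clear any global kubernetes-version setting before starting without Kubernetes:

	minikube start -p NoKubernetes-615063 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd   # exit 14
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-615063 --no-kubernetes --driver=docker --container-runtime=containerd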

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-615063 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-615063 --driver=docker  --container-runtime=containerd: (38.464221605s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-615063 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --driver=docker  --container-runtime=containerd
E1008 18:38:32.188038  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.753022626s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-615063 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-615063 status -o json: exit status 2 (298.275096ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-615063","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-615063
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-615063: (1.945925009s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.00s)
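Restarting an existing Kubernetes profile with --no-kubernetes keeps the host container running but leaves kubelet and the apiserver stopped; status then exits non-zero (2 in the run above):

	minikube start -p NoKubernetes-615063 --no-kubernetes --driver=docker --container-runtime=containerd
	minikube -p NoKubernetes-615063 status -o json    # Host Running, Kubelet/APIServer Stopped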

                                                
                                    
TestNoKubernetes/serial/Start (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-615063 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.347031193s)
--- PASS: TestNoKubernetes/serial/Start (5.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-615063 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-615063 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.065279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
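The check above relies on `systemctl is-active` exiting non-zero when the unit is not active (the ssh exit status 3 in the stderr block is systemd's "inactive" code). A minimal sketch of the same probe, assuming the profile name from the log and treating any non-zero exit as "kubelet is not running":

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same probe the test runs over SSH inside the minikube node.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-615063",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		log.Fatal("kubelet service is active, but this profile was started with --no-kubernetes")
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// systemctl is-active exits 3 for an inactive unit; the test only cares that it is non-zero.
		fmt.Printf("kubelet not running (exit code %d)\n", exitErr.ExitCode())
		return
	}
	log.Fatalf("could not run probe: %v", err)
}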

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-615063
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-615063: (1.21223297s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-615063 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-615063 --driver=docker  --container-runtime=containerd: (6.50121248s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-615063 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-615063 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.528055ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (104.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3367796909 start -p stopped-upgrade-880907 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3367796909 start -p stopped-upgrade-880907 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.661349213s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3367796909 -p stopped-upgrade-880907 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3367796909 -p stopped-upgrade-880907 stop: (19.92621178s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-880907 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-880907 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.694930072s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.28s)
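The upgrade test above is a three-step sequence: start a cluster with the previously released binary, stop it with that same binary, then start the stopped profile with the binary under test. A minimal sketch of that sequence (binary paths and profile name taken from the log; this is not the test's own harness code):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one CLI step and streams its output, aborting on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	oldBinary := "/tmp/minikube-v1.26.0.3367796909" // released binary downloaded during Setup
	newBinary := "out/minikube-linux-arm64"         // binary under test
	profile := "stopped-upgrade-880907"

	// 1. Create the cluster with the old release.
	run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2. Stop it with the same old release.
	run(oldBinary, "-p", profile, "stop")
	// 3. Start the stopped cluster with the new binary; this is the upgrade being exercised.
	run(newBinary, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=containerd")
}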

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-880907
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-880907: (1.26130625s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
x
+
TestPause/serial/Start (90.15s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-213853 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1008 18:44:05.336783  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-213853 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m30.149104478s)
--- PASS: TestPause/serial/Start (90.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.86s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-213853 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-213853 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.831809565s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-258718 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-258718 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (258.722944ms)

                                                
                                                
-- stdout --
	* [false-258718] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:45:15.946626  477632 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:45:15.946856  477632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:45:15.946886  477632 out.go:358] Setting ErrFile to fd 2...
	I1008 18:45:15.946908  477632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:45:15.947249  477632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-283126/.minikube/bin
	I1008 18:45:15.947762  477632 out.go:352] Setting JSON to false
	I1008 18:45:15.949851  477632 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8864,"bootTime":1728404252,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1008 18:45:15.949967  477632 start.go:139] virtualization:  
	I1008 18:45:15.953426  477632 out.go:177] * [false-258718] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1008 18:45:15.956120  477632 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:45:15.956146  477632 notify.go:220] Checking for updates...
	I1008 18:45:15.960041  477632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:45:15.962795  477632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-283126/kubeconfig
	I1008 18:45:15.965435  477632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-283126/.minikube
	I1008 18:45:15.967890  477632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 18:45:15.970754  477632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:45:15.974010  477632 config.go:182] Loaded profile config "pause-213853": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1008 18:45:15.974125  477632 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:45:15.998659  477632 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1008 18:45:15.998794  477632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 18:45:16.099942  477632 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-08 18:45:16.08851319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1008 18:45:16.100189  477632 docker.go:318] overlay module found
	I1008 18:45:16.104745  477632 out.go:177] * Using the docker driver based on user configuration
	I1008 18:45:16.107643  477632 start.go:297] selected driver: docker
	I1008 18:45:16.107664  477632 start.go:901] validating driver "docker" against <nil>
	I1008 18:45:16.107678  477632 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:45:16.110784  477632 out.go:201] 
	W1008 18:45:16.113264  477632 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1008 18:45:16.115827  477632 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-258718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:45:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-213853
contexts:
- context:
    cluster: pause-213853
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:45:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-213853
  name: pause-213853
current-context: pause-213853
kind: Config
preferences: {}
users:
- name: pause-213853
  user:
    client-certificate: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.crt
    client-key: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-258718

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258718"

                                                
                                                
----------------------- debugLogs end: false-258718 [took: 4.227649563s] --------------------------------
helpers_test.go:175: Cleaning up "false-258718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-258718
--- PASS: TestNetworkPlugins/group/false (4.69s)
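The failure captured above is the expected one for this group: with the containerd runtime, `--cni=false` is rejected up front with an MK_USAGE error and exit status 14, before any cluster is created, and the debugLogs dump simply confirms that no "false-258718" profile ever existed. A minimal sketch asserting that exact failure mode (binary path, profile name, exit code, and message all taken from the log; not the test's own code):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-258718",
		"--memory=2048", "--cni=false", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		log.Fatalf("expected the start to be rejected, got err=%v", err)
	}
	// The log above shows exit status 14 and an MK_USAGE message saying containerd requires CNI.
	if exitErr.ExitCode() != 14 || !strings.Contains(string(out), "requires CNI") {
		log.Fatalf("unexpected failure mode: code=%d output=%s", exitErr.ExitCode(), out)
	}
	fmt.Println("--cni=false with containerd rejected as expected")
}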

                                                
                                    
x
+
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-213853 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-213853 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-213853 --output=json --layout=cluster: exit status 2 (380.183949ms)

                                                
                                                
-- stdout --
	{"Name":"pause-213853","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-213853","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
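The `--output=json --layout=cluster` status above uses HTTP-like status codes (418 for Paused, 405 for Stopped, 200 for OK) and exits 2 for a paused cluster while still printing the JSON document. A minimal sketch that decodes just the fields visible in this log (struct shape inferred from the stdout above, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// clusterStatus mirrors only the fields shown in the log above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusName string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "-p", "pause-213853",
		"--output=json", "--layout=cluster")
	// A paused cluster makes the command exit 2, but the JSON is still printed on stdout.
	out, _ := cmd.Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode: %v", err)
	}
	// 418 is the "Paused" code in this layout, as seen in the stdout above.
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}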

                                                
                                    
x
+
TestPause/serial/Unpause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-213853 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.2s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-213853 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-213853 --alsologtostderr -v=5: (1.204179289s)
--- PASS: TestPause/serial/PauseAgain (1.20s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-213853 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-213853 --alsologtostderr -v=5: (2.949406426s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-213853
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-213853: exit status 1 (18.769954ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-213853: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)
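The cleanup check above treats a failing `docker volume inspect` as proof that the profile's volume is gone, alongside scans of `docker ps -a`, `docker network ls`, and the profile list. A minimal sketch of the volume part, assuming the profile name from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "pause-213853"
	// After `minikube delete`, inspecting the profile volume should fail with
	// "no such volume" (exit status 1 and an empty JSON array, as in the log above).
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err == nil {
		log.Fatalf("volume %s still exists:\n%s", profile, out)
	}
	if !strings.Contains(string(out), "no such volume") {
		log.Fatalf("unexpected inspect failure: %v\n%s", err, out)
	}
	fmt.Printf("volume %s removed as expected\n", profile)
}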

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (176.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-265388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1008 18:47:08.402869  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:48:32.183846  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:49:05.337453  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-265388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m56.967827074s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (176.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (60.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-351833 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-351833 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m0.341248124s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-265388 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2bbd44b-4602-4547-997e-50017c3845e7] Pending
helpers_test.go:344: "busybox" [b2bbd44b-4602-4547-997e-50017c3845e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2bbd44b-4602-4547-997e-50017c3845e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003973391s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-265388 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-265388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-265388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.746585898s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-265388 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-265388 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-265388 --alsologtostderr -v=3: (12.535759576s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-265388 -n old-k8s-version-265388
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-265388 -n old-k8s-version-265388: exit status 7 (86.659137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-265388 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-351833 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e04097eb-5083-4b68-aac7-2ee940cc3f54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e04097eb-5083-4b68-aac7-2ee940cc3f54] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.017634963s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-351833 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-351833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-351833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.802048383s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-351833 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-351833 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-351833 --alsologtostderr -v=3: (12.376063531s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-351833 -n no-preload-351833
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-351833 -n no-preload-351833: exit status 7 (74.589586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-351833 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (269.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-351833 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1008 18:51:35.254792  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:53:32.184571  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:54:05.337237  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-351833 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m29.200473013s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-351833 -n no-preload-351833
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nqgsl" [f47e0296-4fce-46fc-89dc-3fcdc56dcdb8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003253886s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nqgsl" [f47e0296-4fce-46fc-89dc-3fcdc56dcdb8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003662827s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-351833 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-351833 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-351833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-351833 -n no-preload-351833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-351833 -n no-preload-351833: exit status 2 (335.437943ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-351833 -n no-preload-351833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-351833 -n no-preload-351833: exit status 2 (308.524065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-351833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-351833 -n no-preload-351833
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-351833 -n no-preload-351833
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)
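The pause verification above reads single components through Go templates (`--format={{.APIServer}}`, `--format={{.Kubelet}}`) and tolerates exit status 2, which `minikube status` returns here when a component is paused or stopped. A minimal sketch of one such probe, assuming the binary path and profile name from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// componentStatus asks minikube status for one component; a paused or stopped
// component makes the command exit non-zero, which the tests above treat as acceptable.
func componentStatus(profile, field string) string {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatalf("status %s: %v", field, err)
		}
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "no-preload-351833"
	// After `minikube pause`, the log above shows APIServer=Paused and Kubelet=Stopped.
	fmt.Println("apiserver:", componentStatus(profile, "APIServer"))
	fmt.Println("kubelet:  ", componentStatus(profile, "Kubelet"))
}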

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (79.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-423092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-423092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m19.101242081s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w44t2" [54bcc86f-58f0-4ecc-a9cb-4992c093eca7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003460559s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-w44t2" [54bcc86f-58f0-4ecc-a9cb-4992c093eca7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005486131s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-265388 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-265388 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-265388 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-265388 -n old-k8s-version-265388
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-265388 -n old-k8s-version-265388: exit status 2 (308.732257ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-265388 -n old-k8s-version-265388
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-265388 -n old-k8s-version-265388: exit status 2 (327.260632ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-265388 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-265388 -n old-k8s-version-265388
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-265388 -n old-k8s-version-265388
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-227109 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-227109 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m28.939867985s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-423092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b5b7446-2cd8-4c32-bd30-abbb18acf83f] Pending
helpers_test.go:344: "busybox" [4b5b7446-2cd8-4c32-bd30-abbb18acf83f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b5b7446-2cd8-4c32-bd30-abbb18acf83f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00582836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-423092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-423092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-423092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.195376229s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-423092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-423092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-423092 --alsologtostderr -v=3: (12.556168442s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423092 -n embed-certs-423092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423092 -n embed-certs-423092: exit status 7 (132.373371ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-423092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-423092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-423092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.196449516s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-423092 -n embed-certs-423092
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-227109 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a879e806-61c5-496d-aa34-e675ba3d5a87] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a879e806-61c5-496d-aa34-e675ba3d5a87] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004075548s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-227109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-227109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-227109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011515657s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-227109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-227109 --alsologtostderr -v=3
E1008 18:58:32.184303  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-227109 --alsologtostderr -v=3: (12.117906845s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109: exit status 7 (88.811374ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-227109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-227109 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1008 18:59:05.337508  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.202843  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.209287  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.220703  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.242162  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.283552  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.364976  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.526701  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:41.848390  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:42.489897  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:43.771444  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:46.332834  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:59:51.454152  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:01.696029  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.034605  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.041401  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.052712  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.074129  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.115605  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.197073  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.358921  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:20.680543  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:21.322555  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:22.178246  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:22.604178  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:25.166974  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:30.288949  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:00:40.531263  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:01:01.013277  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:01:03.139647  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-227109 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m3.42912444s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-78gbl" [185f6dc6-1e5f-44c8-a523-da96f7a7fa96] Running
E1008 19:01:41.974581  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003644901s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-78gbl" [185f6dc6-1e5f-44c8-a523-da96f7a7fa96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003528913s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-423092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-423092 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-423092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423092 -n embed-certs-423092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423092 -n embed-certs-423092: exit status 2 (326.987482ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423092 -n embed-certs-423092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423092 -n embed-certs-423092: exit status 2 (322.659203ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-423092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-423092 -n embed-certs-423092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-423092 -n embed-certs-423092
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-898531 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1008 19:02:25.061820  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-898531 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (36.492121366s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-898531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-898531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028283392s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-898531 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-898531 --alsologtostderr -v=3: (1.24533328s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-898531 -n newest-cni-898531
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-898531 -n newest-cni-898531: exit status 7 (79.549374ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-898531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-898531 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-898531 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.212169763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-898531 -n newest-cni-898531
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-898531 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-898531 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-898531 -n newest-cni-898531
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-898531 -n newest-cni-898531: exit status 2 (320.508988ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-898531 -n newest-cni-898531
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-898531 -n newest-cni-898531: exit status 2 (336.120393ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-898531 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-898531 -n newest-cni-898531
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-898531 -n newest-cni-898531
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1008 19:03:03.895922  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:03:32.184482  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (54.511486971s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fmk2" [d46a7cf2-344d-4faa-8f6e-9c4cb6d61b5a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004655386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fmk2" [d46a7cf2-344d-4faa-8f6e-9c4cb6d61b5a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004674157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-227109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-227109 image list --format=json
E1008 19:03:48.405023  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/addons-246349/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-227109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109: exit status 2 (307.110183ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109: exit status 2 (323.694385ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-227109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-227109 -n default-k8s-diff-port-227109
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-258718 "pgrep -a kubelet"
I1008 19:03:53.415560  288541 config.go:182] Loaded profile config "auto-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xczn5" [23d692d3-0bcb-4fff-bbeb-c6701cc34f55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xczn5" [23d692d3-0bcb-4fff-bbeb-c6701cc34f55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00435921s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m31.375229378s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1008 19:04:41.202674  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:05:08.903837  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/old-k8s-version-265388/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:05:20.034100  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/no-preload-351833/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.816233526s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.82s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8lw6g" [bbbda936-ac93-48f7-b53a-07fef99aa365] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004577605s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fxsh7" [7579b462-2e85-4791-aa36-eb21d0ea6246] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003626907s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-258718 "pgrep -a kubelet"
I1008 19:05:32.107475  288541 config.go:182] Loaded profile config "calico-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ptgmj" [d66cdc78-3945-43b5-ae18-e9d5fe66ef80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ptgmj" [d66cdc78-3945-43b5-ae18-e9d5fe66ef80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004237791s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-258718 "pgrep -a kubelet"
I1008 19:05:32.784280  288541 config.go:182] Loaded profile config "kindnet-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5jn9v" [36dccf73-fc84-4c76-8009-792dd0f28130] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5jn9v" [36dccf73-fc84-4c76-8009-792dd0f28130] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004154486s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (62.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.359666347s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (80.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.356546804s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-258718 "pgrep -a kubelet"
I1008 19:07:10.601231  288541 config.go:182] Loaded profile config "custom-flannel-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7cwk8" [7f15967d-a500-454c-9c0b-8ddfc8aa6179] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7cwk8" [7f15967d-a500-454c-9c0b-8ddfc8aa6179] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003959679s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-258718 "pgrep -a kubelet"
I1008 19:07:30.860164  288541 config.go:182] Loaded profile config "enable-default-cni-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ljtr8" [eccb4235-acd7-47b9-bfa6-9c8fb0846087] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ljtr8" [eccb4235-acd7-47b9-bfa6-9c8fb0846087] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007884237s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.34900916s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (73.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1008 19:08:09.805618  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:09.811980  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:09.823368  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:09.844800  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:09.886397  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:09.968080  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:10.129806  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:10.452948  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:11.094858  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:12.377138  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:14.939418  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:15.257094  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:20.060857  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:30.302501  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:32.183669  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/functional-138958/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-258718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m13.386804908s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jr5d9" [9e37ef9b-4855-4031-90dc-106bfcc27e7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003871918s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
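For reference: the controller-pod check above waits for the flannel DaemonSet pods selected by the app=flannel label in the kube-flannel namespace. Assuming the flannel-258718 context still exists, the same state can be inspected manually (a one-off check, not part of the test harness):
  kubectl --context flannel-258718 get pods -n kube-flannel -l app=flannel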

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-258718 "pgrep -a kubelet"
I1008 19:08:46.043773  288541 config.go:182] Loaded profile config "flannel-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s7lwp" [fecf0c4e-b5aa-41ae-91f1-97dfd4f1d556] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1008 19:08:50.783838  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/default-k8s-diff-port-227109/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-s7lwp" [fecf0c4e-b5aa-41ae-91f1-97dfd4f1d556] Running
E1008 19:08:53.746160  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:53.752697  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:53.764152  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:53.785522  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:53.826979  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:53.908394  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:54.069893  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:54.391819  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:55.033740  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:56.315785  288541 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/auto-258718/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003553437s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-258718 "pgrep -a kubelet"
I1008 19:09:20.401044  288541 config.go:182] Loaded profile config "bridge-258718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-258718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6gsvs" [4d43ef7d-d1ed-40d3-9f70-2a28b9758eed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6gsvs" [4d43ef7d-d1ed-40d3-9f70-2a28b9758eed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004749571s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-258718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-258718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (27/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-419107 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-419107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-419107
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-300799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-300799
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-258718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:44:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-213853
contexts:
- context:
    cluster: pause-213853
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:44:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-213853
  name: pause-213853
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-213853
  user:
    client-certificate: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.crt
    client-key: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.key
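The kubeconfig dump above is consistent with the repeated "context was not found for specified context: kubenet-258718" errors in this debug log: the only context present is pause-213853 and current-context is empty, so a context has to be named explicitly, for example (a hypothetical one-off check, not part of the test output):
  kubectl --context pause-213853 get pods -A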

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-258718

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258718"

                                                
                                                
----------------------- debugLogs end: kubenet-258718 [took: 4.194006774s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-258718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-258718
--- SKIP: TestNetworkPlugins/group/kubenet (4.44s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-258718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-258718" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19774-283126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:45:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-213853
contexts:
- context:
    cluster: pause-213853
    extensions:
    - extension:
        last-update: Tue, 08 Oct 2024 18:45:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-213853
  name: pause-213853
current-context: pause-213853
kind: Config
preferences: {}
users:
- name: pause-213853
  user:
    client-certificate: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.crt
    client-key: /home/jenkins/minikube-integration/19774-283126/.minikube/profiles/pause-213853/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-258718

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-258718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258718"

                                                
                                                
----------------------- debugLogs end: cilium-258718 [took: 5.187860445s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-258718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-258718
--- SKIP: TestNetworkPlugins/group/cilium (5.64s)

                                                
                                    