Test Report: Docker_Linux_containerd_arm64 19679

                    
7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298

Tests failed (2/327)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 199.98       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart | 378.31       |
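The Volcano failure below is a scheduling problem: the test job asks for a full CPU (requests and limits of cpu: 1), while the single kicbase node was created with 2 CPUs (see the cluster config and the docker run line later in the log) and evidently has less than one whole CPU left unrequested once the addon pods are running, so the scheduler reports "0/1 nodes are unavailable: 1 Insufficient cpu." The commands below are a hedged diagnostic sketch for confirming the shortfall on a similar profile; the node name is assumed to match the profile name, and the output will differ per run.

# Hedged sketch (not part of the test run): inspect allocatable CPU and per-pod CPU requests.
# Assumes the node is named after the profile, as minikube normally does.
kubectl --context addons-610387 describe node addons-610387 | grep -A 7 Allocatable
kubectl --context addons-610387 get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQUEST:.spec.containers[*].resources.requests.cpu'
kubectl --context addons-610387 -n my-volcano get pod test-job-nginx-0 -o jsonpath='{.spec.containers[0].resources}'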
TestAddons/serial/Volcano (199.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 51.729647ms
addons_test.go:851: volcano-controller stabilized in 52.020932ms
addons_test.go:835: volcano-scheduler stabilized in 52.062065ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-96kkn" [a15b2c2a-f69c-4480-ae34-29e6e950c048] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003841707s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-868xm" [134ae08b-5e12-4efe-8ca6-fc79c119de26] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003998993s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-qlxrw" [8ed6d5c5-5eda-4e76-be3f-2e16950c3e2e] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005463452s
addons_test.go:870: (dbg) Run:  kubectl --context addons-610387 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-610387 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-610387 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fffabc31-2fe1-468e-bcf0-b9c93ab8df44] Pending
helpers_test.go:344: "test-job-nginx-0" [fffabc31-2fe1-468e-bcf0-b9c93ab8df44] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-610387 -n addons-610387
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-20 18:16:27.956683587 +0000 UTC m=+428.247193502
addons_test.go:902: (dbg) Run:  kubectl --context addons-610387 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-610387 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-82d32205-d5c8-4600-a5e4-be66ff5cb6ec
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vfbfm (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-vfbfm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-610387 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-610387 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
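For reference, a Volcano Job matching the pod description above (queue test, one nginx task running sleep 10m with a one-CPU request and limit) would look roughly like the sketch below. This is an illustrative reconstruction from the describe output, not the actual testdata/vcjob.yaml used by the test.

# Hedged sketch reconstructed from the pod describe output above; field values are
# illustrative and may differ from the real testdata/vcjob.yaml.
kubectl --context addons-610387 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  queue: test
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
              command: ["sleep", "10m"]
              resources:
                requests:
                  cpu: "1"
                limits:
                  cpu: "1"
EOF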
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-610387
helpers_test.go:235: (dbg) docker inspect addons-610387:

-- stdout --
	[
	    {
	        "Id": "8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0",
	        "Created": "2024-09-20T18:09:58.624905193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:09:58.77593474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0/hosts",
	        "LogPath": "/var/lib/docker/containers/8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0/8cd52c23411ea5360f51afdf4052ad03fe274c8583970d273c445733d522feb0-json.log",
	        "Name": "/addons-610387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-610387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-610387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/00069f45fd18e0c2a52979dbb6ca10948bb02a49ed5362dc543a8f67e1cfa647-init/diff:/var/lib/docker/overlay2/3aa0f15c41477a99e99dc1a77b5fdd60c51e1433d51cff06d0a41fe51ac2c7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/00069f45fd18e0c2a52979dbb6ca10948bb02a49ed5362dc543a8f67e1cfa647/merged",
	                "UpperDir": "/var/lib/docker/overlay2/00069f45fd18e0c2a52979dbb6ca10948bb02a49ed5362dc543a8f67e1cfa647/diff",
	                "WorkDir": "/var/lib/docker/overlay2/00069f45fd18e0c2a52979dbb6ca10948bb02a49ed5362dc543a8f67e1cfa647/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-610387",
	                "Source": "/var/lib/docker/volumes/addons-610387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-610387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-610387",
	                "name.minikube.sigs.k8s.io": "addons-610387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9cb12331ad463d3b6f003b6571af79e24f18f1a8e5ee7152f89defc921b6588",
	            "SandboxKey": "/var/run/docker/netns/a9cb12331ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-610387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7df49a32ab925850a55983b85c85bdd96c3b592231f4d48c86df7ebab54f1e68",
	                    "EndpointID": "e30388fc0e983dfe89c7c2c5987a3d0c6c79b2c3030e7e975d2211fb19b624ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-610387",
	                        "8cd52c23411e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-610387 -n addons-610387
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 logs -n 25: (1.684347185s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-123612   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | -p download-only-123612              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p download-only-123612              | download-only-123612   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| start   | -o=json --download-only              | download-only-342253   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | -p download-only-342253              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p download-only-342253              | download-only-342253   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p download-only-123612              | download-only-123612   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p download-only-342253              | download-only-342253   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| start   | --download-only -p                   | download-docker-026476 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | download-docker-026476               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-026476            | download-docker-026476 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| start   | --download-only -p                   | binary-mirror-913794   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | binary-mirror-913794                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43405               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-913794              | binary-mirror-913794   | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| addons  | enable dashboard -p                  | addons-610387          | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | addons-610387                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-610387          | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | addons-610387                        |                        |         |         |                     |                     |
	| start   | -p addons-610387 --wait=true         | addons-610387          | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:13 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:09:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:09:34.530216  447541 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:09:34.530356  447541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:34.530365  447541 out.go:358] Setting ErrFile to fd 2...
	I0920 18:09:34.530370  447541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:34.530615  447541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:09:34.531075  447541 out.go:352] Setting JSON to false
	I0920 18:09:34.531931  447541 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6726,"bootTime":1726849049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:09:34.532002  447541 start.go:139] virtualization:  
	I0920 18:09:34.535080  447541 out.go:177] * [addons-610387] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:09:34.538111  447541 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:09:34.538218  447541 notify.go:220] Checking for updates...
	I0920 18:09:34.543616  447541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:09:34.545971  447541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:09:34.548183  447541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:09:34.550708  447541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:09:34.552897  447541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:09:34.555733  447541 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:09:34.577065  447541 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:09:34.577186  447541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:34.627086  447541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:09:34.617887128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:34.627191  447541 docker.go:318] overlay module found
	I0920 18:09:34.631169  447541 out.go:177] * Using the docker driver based on user configuration
	I0920 18:09:34.633327  447541 start.go:297] selected driver: docker
	I0920 18:09:34.633354  447541 start.go:901] validating driver "docker" against <nil>
	I0920 18:09:34.633368  447541 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:09:34.634100  447541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:34.689106  447541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:09:34.679355563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:34.689328  447541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:09:34.689563  447541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:09:34.691608  447541 out.go:177] * Using Docker driver with root privileges
	I0920 18:09:34.693776  447541 cni.go:84] Creating CNI manager for ""
	I0920 18:09:34.693835  447541 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:09:34.693854  447541 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:09:34.693951  447541 start.go:340] cluster config:
	{Name:addons-610387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-610387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:34.697468  447541 out.go:177] * Starting "addons-610387" primary control-plane node in "addons-610387" cluster
	I0920 18:09:34.699641  447541 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 18:09:34.701726  447541 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:09:34.703610  447541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:09:34.703671  447541 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 18:09:34.703685  447541 cache.go:56] Caching tarball of preloaded images
	I0920 18:09:34.703687  447541 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:09:34.703766  447541 preload.go:172] Found /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 18:09:34.703776  447541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 18:09:34.704138  447541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/config.json ...
	I0920 18:09:34.704157  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/config.json: {Name:mka31cf2716c7f32883de2b6074ca9c1f003f1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:09:34.718752  447541 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:09:34.718868  447541 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:09:34.718893  447541 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:09:34.718899  447541 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:09:34.718907  447541 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:09:34.718929  447541 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:09:52.156626  447541 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:09:52.156667  447541 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:09:52.156698  447541 start.go:360] acquireMachinesLock for addons-610387: {Name:mk17082d9dda69840f45a65145e25b5a5832a5c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:09:52.157618  447541 start.go:364] duration metric: took 887.123µs to acquireMachinesLock for "addons-610387"
	I0920 18:09:52.157667  447541 start.go:93] Provisioning new machine with config: &{Name:addons-610387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-610387 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 18:09:52.157764  447541 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:09:52.159979  447541 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:09:52.160253  447541 start.go:159] libmachine.API.Create for "addons-610387" (driver="docker")
	I0920 18:09:52.160288  447541 client.go:168] LocalClient.Create starting
	I0920 18:09:52.160406  447541 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem
	I0920 18:09:52.407608  447541 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem
	I0920 18:09:52.762690  447541 cli_runner.go:164] Run: docker network inspect addons-610387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:09:52.780693  447541 cli_runner.go:211] docker network inspect addons-610387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:09:52.780805  447541 network_create.go:284] running [docker network inspect addons-610387] to gather additional debugging logs...
	I0920 18:09:52.780836  447541 cli_runner.go:164] Run: docker network inspect addons-610387
	W0920 18:09:52.795365  447541 cli_runner.go:211] docker network inspect addons-610387 returned with exit code 1
	I0920 18:09:52.795410  447541 network_create.go:287] error running [docker network inspect addons-610387]: docker network inspect addons-610387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-610387 not found
	I0920 18:09:52.795426  447541 network_create.go:289] output of [docker network inspect addons-610387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-610387 not found
	
	** /stderr **
	I0920 18:09:52.795537  447541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:09:52.811239  447541 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b5360}
	I0920 18:09:52.811284  447541 network_create.go:124] attempt to create docker network addons-610387 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:09:52.811340  447541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-610387 addons-610387
	I0920 18:09:52.882272  447541 network_create.go:108] docker network addons-610387 192.168.49.0/24 created
	I0920 18:09:52.882331  447541 kic.go:121] calculated static IP "192.168.49.2" for the "addons-610387" container
	I0920 18:09:52.882405  447541 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:09:52.896553  447541 cli_runner.go:164] Run: docker volume create addons-610387 --label name.minikube.sigs.k8s.io=addons-610387 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:09:52.912111  447541 oci.go:103] Successfully created a docker volume addons-610387
	I0920 18:09:52.912213  447541 cli_runner.go:164] Run: docker run --rm --name addons-610387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610387 --entrypoint /usr/bin/test -v addons-610387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:09:54.517204  447541 cli_runner.go:217] Completed: docker run --rm --name addons-610387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610387 --entrypoint /usr/bin/test -v addons-610387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.604943673s)
	I0920 18:09:54.517238  447541 oci.go:107] Successfully prepared a docker volume addons-610387
	I0920 18:09:54.517266  447541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:09:54.517286  447541 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:09:54.517354  447541 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-610387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:09:58.556424  447541 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-610387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.039020327s)
	I0920 18:09:58.556454  447541 kic.go:203] duration metric: took 4.039164698s to extract preloaded images to volume ...
	W0920 18:09:58.556589  447541 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:09:58.556703  447541 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:09:58.609790  447541 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-610387 --name addons-610387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-610387 --network addons-610387 --ip 192.168.49.2 --volume addons-610387:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:09:58.933985  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Running}}
	I0920 18:09:58.962455  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:09:58.985148  447541 cli_runner.go:164] Run: docker exec addons-610387 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:09:59.059876  447541 oci.go:144] the created container "addons-610387" has a running status.
	I0920 18:09:59.059909  447541 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa...
	I0920 18:10:00.036628  447541 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:10:00.083603  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:00.141719  447541 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:10:00.141742  447541 kic_runner.go:114] Args: [docker exec --privileged addons-610387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:10:00.338660  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:00.457351  447541 machine.go:93] provisionDockerMachine start ...
	I0920 18:10:00.457518  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:00.555194  447541 main.go:141] libmachine: Using SSH client type: native
	I0920 18:10:00.555486  447541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:10:00.555497  447541 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:10:00.734466  447541 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610387
	
	I0920 18:10:00.734498  447541 ubuntu.go:169] provisioning hostname "addons-610387"
	I0920 18:10:00.734572  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:00.754226  447541 main.go:141] libmachine: Using SSH client type: native
	I0920 18:10:00.754626  447541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:10:00.754666  447541 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-610387 && echo "addons-610387" | sudo tee /etc/hostname
	I0920 18:10:00.928095  447541 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610387
	
	I0920 18:10:00.928278  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:00.946938  447541 main.go:141] libmachine: Using SSH client type: native
	I0920 18:10:00.947186  447541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:10:00.947210  447541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-610387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610387/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-610387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:10:01.090793  447541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:10:01.090821  447541 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-440039/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-440039/.minikube}
	I0920 18:10:01.090856  447541 ubuntu.go:177] setting up certificates
	I0920 18:10:01.090866  447541 provision.go:84] configureAuth start
	I0920 18:10:01.090946  447541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610387
	I0920 18:10:01.108640  447541 provision.go:143] copyHostCerts
	I0920 18:10:01.108739  447541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem (1082 bytes)
	I0920 18:10:01.108861  447541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem (1123 bytes)
	I0920 18:10:01.108921  447541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem (1675 bytes)
	I0920 18:10:01.108970  447541 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem org=jenkins.addons-610387 san=[127.0.0.1 192.168.49.2 addons-610387 localhost minikube]
	I0920 18:10:01.675303  447541 provision.go:177] copyRemoteCerts
	I0920 18:10:01.675380  447541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:10:01.675426  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:01.693927  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:01.800325  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:10:01.828686  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:10:01.854664  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 18:10:01.879626  447541 provision.go:87] duration metric: took 788.746214ms to configureAuth
	I0920 18:10:01.879660  447541 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:10:01.879869  447541 config.go:182] Loaded profile config "addons-610387": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:10:01.879882  447541 machine.go:96] duration metric: took 1.422507715s to provisionDockerMachine
	I0920 18:10:01.879890  447541 client.go:171] duration metric: took 9.719591812s to LocalClient.Create
	I0920 18:10:01.879911  447541 start.go:167] duration metric: took 9.71966058s to libmachine.API.Create "addons-610387"
	I0920 18:10:01.879923  447541 start.go:293] postStartSetup for "addons-610387" (driver="docker")
	I0920 18:10:01.879934  447541 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:10:01.879991  447541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:10:01.880040  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:01.897583  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:02.000651  447541 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:10:02.004404  447541 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:10:02.004448  447541 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:10:02.004466  447541 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:10:02.004475  447541 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:10:02.004487  447541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/addons for local assets ...
	I0920 18:10:02.004560  447541 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/files for local assets ...
	I0920 18:10:02.004598  447541 start.go:296] duration metric: took 124.668206ms for postStartSetup
	I0920 18:10:02.005017  447541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610387
	I0920 18:10:02.022938  447541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/config.json ...
	I0920 18:10:02.023264  447541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:10:02.023319  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:02.054016  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:02.151323  447541 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:10:02.156377  447541 start.go:128] duration metric: took 9.998590645s to createHost
	I0920 18:10:02.156404  447541 start.go:83] releasing machines lock for "addons-610387", held for 9.99876197s
	I0920 18:10:02.156477  447541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610387
	I0920 18:10:02.174386  447541 ssh_runner.go:195] Run: cat /version.json
	I0920 18:10:02.174437  447541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:10:02.174508  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:02.174442  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:02.192823  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:02.194307  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:02.416352  447541 ssh_runner.go:195] Run: systemctl --version
	I0920 18:10:02.421365  447541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:10:02.426238  447541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 18:10:02.454652  447541 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:10:02.454765  447541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:10:02.484721  447541 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:10:02.484757  447541 start.go:495] detecting cgroup driver to use...
	I0920 18:10:02.484793  447541 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:10:02.484872  447541 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 18:10:02.498211  447541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 18:10:02.510354  447541 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:10:02.510430  447541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:10:02.525260  447541 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:10:02.540236  447541 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:10:02.634149  447541 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:10:02.722122  447541 docker.go:233] disabling docker service ...
	I0920 18:10:02.722241  447541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:10:02.743869  447541 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:10:02.756583  447541 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:10:02.844258  447541 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:10:02.939605  447541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:10:02.952402  447541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:10:02.971496  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 18:10:02.981703  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 18:10:02.992054  447541 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 18:10:02.992162  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 18:10:03.002726  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 18:10:03.013409  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 18:10:03.023705  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 18:10:03.035468  447541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:10:03.047029  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 18:10:03.058403  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 18:10:03.069663  447541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 18:10:03.081163  447541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:10:03.090713  447541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:10:03.100026  447541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:10:03.188367  447541 ssh_runner.go:195] Run: sudo systemctl restart containerd
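The containerd reconfiguration above, condensed into one standalone sketch for readability; the file, keys and values mirror the sed commands shown in the log for the cgroupfs driver (this is a summary of the logged steps, not an additional procedure):

  CFG=/etc/containerd/config.toml
  sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"   # pause image
  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"                          # cgroupfs, not systemd
  sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"                     # runc v2 shim
  sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"                         # CNI config dir
  sudo systemctl daemon-reload && sudo systemctl restart containerd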
	I0920 18:10:03.323494  447541 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 18:10:03.323594  447541 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 18:10:03.327556  447541 start.go:563] Will wait 60s for crictl version
	I0920 18:10:03.327621  447541 ssh_runner.go:195] Run: which crictl
	I0920 18:10:03.331135  447541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:10:03.368726  447541 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 18:10:03.368826  447541 ssh_runner.go:195] Run: containerd --version
	I0920 18:10:03.390626  447541 ssh_runner.go:195] Run: containerd --version
	I0920 18:10:03.415629  447541 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0920 18:10:03.418959  447541 cli_runner.go:164] Run: docker network inspect addons-610387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:10:03.433991  447541 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:10:03.437763  447541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
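Spelled out, the host.minikube.internal one-liner above is equivalent to the following sketch (same gateway IP and file as in the log):

  # Drop any stale host.minikube.internal entry, append the current gateway IP,
  # then copy the result back; cp (rather than mv) keeps the original file's inode.
  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts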
	I0920 18:10:03.448648  447541 kubeadm.go:883] updating cluster {Name:addons-610387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-610387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:10:03.448786  447541 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:10:03.448861  447541 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:10:03.486576  447541 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 18:10:03.486603  447541 containerd.go:534] Images already preloaded, skipping extraction
	I0920 18:10:03.486671  447541 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:10:03.524606  447541 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 18:10:03.524631  447541 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:10:03.524640  447541 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0920 18:10:03.524802  447541 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-610387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-610387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:10:03.524877  447541 ssh_runner.go:195] Run: sudo crictl info
	I0920 18:10:03.567408  447541 cni.go:84] Creating CNI manager for ""
	I0920 18:10:03.567436  447541 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:10:03.567446  447541 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:10:03.567469  447541 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610387 NodeName:addons-610387 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:10:03.567615  447541 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-610387"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:10:03.567694  447541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:10:03.577788  447541 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:10:03.577892  447541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:10:03.587013  447541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 18:10:03.605856  447541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:10:03.624710  447541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0920 18:10:03.643563  447541 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:10:03.647102  447541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:10:03.658113  447541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:10:03.748928  447541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:10:03.765423  447541 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387 for IP: 192.168.49.2
	I0920 18:10:03.765448  447541 certs.go:194] generating shared ca certs ...
	I0920 18:10:03.765465  447541 certs.go:226] acquiring lock for ca certs: {Name:mk3d7fcf9ade00248d7372a8cec4403eeffc64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:03.765664  447541 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key
	I0920 18:10:04.333334  447541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt ...
	I0920 18:10:04.333370  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt: {Name:mke698288ffa68fe7deebfe47ed0158a91297dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:04.333604  447541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key ...
	I0920 18:10:04.333620  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key: {Name:mkbf46dfdc1776848bc6dd979fb1784d72b6590b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:04.333711  447541 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key
	I0920 18:10:05.124484  447541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.crt ...
	I0920 18:10:05.124517  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.crt: {Name:mk77efe1c50e007ead856bddcceba0cf7c9167f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.124723  447541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key ...
	I0920 18:10:05.124736  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key: {Name:mk80abe1124ab7744b0effdeb7e78921b21bc849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.124821  447541 certs.go:256] generating profile certs ...
	I0920 18:10:05.124884  447541 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.key
	I0920 18:10:05.124915  447541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt with IP's: []
	I0920 18:10:05.606547  447541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt ...
	I0920 18:10:05.606580  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: {Name:mk0d37f6968da7ec181902b093ebd4fd3538764e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.606776  447541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.key ...
	I0920 18:10:05.606788  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.key: {Name:mk3fc713f4302ad2565e87216a268be86613e3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.607458  447541 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key.44a863b8
	I0920 18:10:05.607483  447541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt.44a863b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:10:05.950483  447541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt.44a863b8 ...
	I0920 18:10:05.950515  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt.44a863b8: {Name:mk1a9152f56cc9803cdc98db0c1dafac791b23ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.950705  447541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key.44a863b8 ...
	I0920 18:10:05.950721  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key.44a863b8: {Name:mk1a680fb4cf9e9dc92b68e2cf7e930931c98f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:05.950810  447541 certs.go:381] copying /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt.44a863b8 -> /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt
	I0920 18:10:05.950890  447541 certs.go:385] copying /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key.44a863b8 -> /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key
	I0920 18:10:05.950944  447541 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.key
	I0920 18:10:05.950964  447541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.crt with IP's: []
	I0920 18:10:06.311175  447541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.crt ...
	I0920 18:10:06.311208  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.crt: {Name:mk459949adb84587c068634adce6f9d12e6f93aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:06.312168  447541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.key ...
	I0920 18:10:06.312188  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.key: {Name:mk946b9c0a858d2ae308d633b025430c55e4070b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:06.312443  447541 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 18:10:06.312488  447541 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:10:06.312518  447541 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:10:06.312547  447541 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem (1675 bytes)
	I0920 18:10:06.313182  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:10:06.338384  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:10:06.362963  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:10:06.387828  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 18:10:06.412516  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:10:06.438876  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:10:06.467138  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:10:06.492624  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:10:06.517364  447541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:10:06.541545  447541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:10:06.559952  447541 ssh_runner.go:195] Run: openssl version
	I0920 18:10:06.565473  447541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:10:06.575250  447541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:10:06.579075  447541 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:10 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:10:06.579202  447541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:10:06.586464  447541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
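A short sketch of the certificate trust steps above: the CA already linked into /etc/ssl/certs is additionally exposed under its OpenSSL subject hash (b5213941 in this run) so TLS clients can look it up by hash; paths match the log, only the variable names are illustrative:

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")                  # e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"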
	I0920 18:10:06.596127  447541 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:10:06.599428  447541 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:10:06.599479  447541 kubeadm.go:392] StartCluster: {Name:addons-610387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-610387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:10:06.599559  447541 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 18:10:06.599620  447541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:10:06.638656  447541 cri.go:89] found id: ""
	I0920 18:10:06.638733  447541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:10:06.647804  447541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:10:06.657114  447541 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:10:06.657184  447541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:10:06.666284  447541 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:10:06.666377  447541 kubeadm.go:157] found existing configuration files:
	
	I0920 18:10:06.666466  447541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:10:06.675362  447541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:10:06.675432  447541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:10:06.684897  447541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:10:06.693861  447541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:10:06.693930  447541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:10:06.702808  447541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:10:06.712202  447541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:10:06.712288  447541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:10:06.721087  447541 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:10:06.730004  447541 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:10:06.730100  447541 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:10:06.739360  447541 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:10:06.781165  447541 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:10:06.781485  447541 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:10:06.805985  447541 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:10:06.806061  447541 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 18:10:06.806102  447541 kubeadm.go:310] OS: Linux
	I0920 18:10:06.806152  447541 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:10:06.806204  447541 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:10:06.806255  447541 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:10:06.806320  447541 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:10:06.806374  447541 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:10:06.806432  447541 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:10:06.806488  447541 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:10:06.806542  447541 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:10:06.806591  447541 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:10:06.878780  447541 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:10:06.878893  447541 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:10:06.878988  447541 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:10:06.894726  447541 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:10:06.900594  447541 out.go:235]   - Generating certificates and keys ...
	I0920 18:10:06.900802  447541 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:10:06.900910  447541 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:10:07.345678  447541 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:10:07.651004  447541 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:10:08.121733  447541 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:10:08.616831  447541 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:10:09.160248  447541 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:10:09.160786  447541 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-610387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:10:09.831631  447541 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:10:09.831888  447541 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-610387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:10:10.164051  447541 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:10:11.109578  447541 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:10:11.587689  447541 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:10:11.587975  447541 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:10:12.157647  447541 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:10:12.949042  447541 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:10:13.212383  447541 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:10:13.678454  447541 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:10:14.280585  447541 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:10:14.281172  447541 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:10:14.284129  447541 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:10:14.286814  447541 out.go:235]   - Booting up control plane ...
	I0920 18:10:14.286917  447541 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:10:14.286993  447541 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:10:14.289814  447541 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:10:14.314538  447541 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:10:14.321974  447541 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:10:14.322037  447541 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:10:14.424218  447541 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:10:14.424344  447541 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:10:15.425338  447541 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000989956s
	I0920 18:10:15.425429  447541 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:10:21.426397  447541 kubeadm.go:310] [api-check] The API server is healthy after 6.00132081s
	I0920 18:10:21.446270  447541 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:10:21.462222  447541 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:10:21.487737  447541 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:10:21.488252  447541 kubeadm.go:310] [mark-control-plane] Marking the node addons-610387 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:10:21.514336  447541 kubeadm.go:310] [bootstrap-token] Using token: xs9ph4.1vrqx9xttjwboefq
	I0920 18:10:21.516385  447541 out.go:235]   - Configuring RBAC rules ...
	I0920 18:10:21.516535  447541 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:10:21.524856  447541 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:10:21.534476  447541 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:10:21.543144  447541 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:10:21.547321  447541 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:10:21.551493  447541 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:10:21.835324  447541 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:10:22.263659  447541 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:10:22.835228  447541 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:10:22.836346  447541 kubeadm.go:310] 
	I0920 18:10:22.836420  447541 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:10:22.836435  447541 kubeadm.go:310] 
	I0920 18:10:22.836512  447541 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:10:22.836522  447541 kubeadm.go:310] 
	I0920 18:10:22.836548  447541 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:10:22.836606  447541 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:10:22.836670  447541 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:10:22.836680  447541 kubeadm.go:310] 
	I0920 18:10:22.836734  447541 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:10:22.836744  447541 kubeadm.go:310] 
	I0920 18:10:22.836791  447541 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:10:22.836801  447541 kubeadm.go:310] 
	I0920 18:10:22.836852  447541 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:10:22.836930  447541 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:10:22.837000  447541 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:10:22.837009  447541 kubeadm.go:310] 
	I0920 18:10:22.837091  447541 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:10:22.837175  447541 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:10:22.837184  447541 kubeadm.go:310] 
	I0920 18:10:22.837292  447541 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xs9ph4.1vrqx9xttjwboefq \
	I0920 18:10:22.837396  447541 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b9cd0a9e6a358dbae93b6dfe7b92a90668f1e930161f8ef646a922f0b0b4ec5 \
	I0920 18:10:22.837419  447541 kubeadm.go:310] 	--control-plane 
	I0920 18:10:22.837435  447541 kubeadm.go:310] 
	I0920 18:10:22.837518  447541 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:10:22.837527  447541 kubeadm.go:310] 
	I0920 18:10:22.837608  447541 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xs9ph4.1vrqx9xttjwboefq \
	I0920 18:10:22.837711  447541 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b9cd0a9e6a358dbae93b6dfe7b92a90668f1e930161f8ef646a922f0b0b4ec5 
	I0920 18:10:22.842031  447541 kubeadm.go:310] W0920 18:10:06.777691    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:10:22.842420  447541 kubeadm.go:310] W0920 18:10:06.778749    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:10:22.842713  447541 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 18:10:22.842850  447541 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:10:22.842861  447541 cni.go:84] Creating CNI manager for ""
	I0920 18:10:22.842869  447541 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:10:22.845158  447541 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:10:22.847183  447541 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:10:22.851123  447541 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:10:22.851147  447541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:10:22.870691  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:10:23.162210  447541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:10:23.162380  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:23.162430  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610387 minikube.k8s.io/updated_at=2024_09_20T18_10_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-610387 minikube.k8s.io/primary=true
	I0920 18:10:23.343021  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:23.343142  447541 ops.go:34] apiserver oom_adj: -16
	I0920 18:10:23.843520  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:24.344072  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:24.843897  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:25.343153  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:25.843551  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:26.344109  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:26.843845  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:27.343262  447541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:10:27.526352  447541 kubeadm.go:1113] duration metric: took 4.364053264s to wait for elevateKubeSystemPrivileges
	I0920 18:10:27.526381  447541 kubeadm.go:394] duration metric: took 20.92690561s to StartCluster
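The repeated "kubectl get sa default" calls above are a readiness poll: the cluster is only considered usable once the default ServiceAccount exists. A standalone sketch of the same wait, with the binary and kubeconfig paths taken from the log (the 0.5 s retry interval is an assumption based on the timestamps):

  KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
  # Retry until the default ServiceAccount shows up in the new cluster.
  until sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
    sleep 0.5
  done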
	I0920 18:10:27.526398  447541 settings.go:142] acquiring lock: {Name:mk1135c1a1ce95063626d6fac03fabf56993cb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:27.526522  447541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:10:27.526951  447541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/kubeconfig: {Name:mkc0c275236e567d398d3ba786de8188e8f878bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:10:27.527128  447541 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 18:10:27.527308  447541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:10:27.527547  447541 config.go:182] Loaded profile config "addons-610387": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:10:27.527577  447541 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:10:27.527650  447541 addons.go:69] Setting yakd=true in profile "addons-610387"
	I0920 18:10:27.527665  447541 addons.go:234] Setting addon yakd=true in "addons-610387"
	I0920 18:10:27.527689  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.528187  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.528507  447541 addons.go:69] Setting inspektor-gadget=true in profile "addons-610387"
	I0920 18:10:27.528522  447541 addons.go:234] Setting addon inspektor-gadget=true in "addons-610387"
	I0920 18:10:27.528546  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.528964  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.529417  447541 addons.go:69] Setting cloud-spanner=true in profile "addons-610387"
	I0920 18:10:27.529439  447541 addons.go:234] Setting addon cloud-spanner=true in "addons-610387"
	I0920 18:10:27.529465  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.529869  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.533092  447541 addons.go:69] Setting metrics-server=true in profile "addons-610387"
	I0920 18:10:27.533170  447541 addons.go:234] Setting addon metrics-server=true in "addons-610387"
	I0920 18:10:27.533223  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.533750  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.537081  447541 out.go:177] * Verifying Kubernetes components...
	I0920 18:10:27.539342  447541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:10:27.536757  447541 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-610387"
	I0920 18:10:27.540210  447541 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-610387"
	I0920 18:10:27.540256  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.540771  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536761  447541 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-610387"
	I0920 18:10:27.546783  447541 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-610387"
	I0920 18:10:27.546838  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.536769  447541 addons.go:69] Setting default-storageclass=true in profile "addons-610387"
	I0920 18:10:27.552602  447541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-610387"
	I0920 18:10:27.554104  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.554166  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536776  447541 addons.go:69] Setting gcp-auth=true in profile "addons-610387"
	I0920 18:10:27.600918  447541 mustload.go:65] Loading cluster: addons-610387
	I0920 18:10:27.601189  447541 config.go:182] Loaded profile config "addons-610387": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:10:27.601569  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536778  447541 addons.go:69] Setting storage-provisioner=true in profile "addons-610387"
	I0920 18:10:27.619609  447541 addons.go:234] Setting addon storage-provisioner=true in "addons-610387"
	I0920 18:10:27.619653  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.620146  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.637443  447541 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:10:27.640911  447541 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:10:27.643238  447541 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:10:27.643262  447541 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:10:27.643337  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.536780  447541 addons.go:69] Setting ingress=true in profile "addons-610387"
	I0920 18:10:27.647211  447541 addons.go:234] Setting addon ingress=true in "addons-610387"
	I0920 18:10:27.647266  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.536784  447541 addons.go:69] Setting ingress-dns=true in profile "addons-610387"
	I0920 18:10:27.647527  447541 addons.go:234] Setting addon ingress-dns=true in "addons-610387"
	I0920 18:10:27.647548  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.647943  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.681512  447541 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:10:27.681535  447541 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:10:27.681604  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.536782  447541 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-610387"
	I0920 18:10:27.689307  447541 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610387"
	I0920 18:10:27.689680  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536791  447541 addons.go:69] Setting volcano=true in profile "addons-610387"
	I0920 18:10:27.696273  447541 addons.go:234] Setting addon volcano=true in "addons-610387"
	I0920 18:10:27.696317  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.696803  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.702155  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536795  447541 addons.go:69] Setting volumesnapshots=true in profile "addons-610387"
	I0920 18:10:27.705293  447541 addons.go:234] Setting addon volumesnapshots=true in "addons-610387"
	I0920 18:10:27.705352  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.705822  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.536774  447541 addons.go:69] Setting registry=true in profile "addons-610387"
	I0920 18:10:27.741493  447541 addons.go:234] Setting addon registry=true in "addons-610387"
	I0920 18:10:27.741537  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.742110  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.743378  447541 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:10:27.745326  447541 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:10:27.745348  447541 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:10:27.745417  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.769499  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.770372  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:10:27.772260  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:10:27.775344  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:10:27.777684  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:10:27.779421  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:10:27.785462  447541 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 18:10:27.798104  447541 addons.go:234] Setting addon default-storageclass=true in "addons-610387"
	I0920 18:10:27.798144  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:27.803909  447541 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:10:27.803930  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:10:27.803997  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.836432  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:10:27.842500  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:10:27.845246  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:27.845306  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:27.845851  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:27.868788  447541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:10:27.881905  447541 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:10:27.883972  447541 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:10:27.883994  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:10:27.884184  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.890444  447541 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:10:27.890688  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:10:27.893387  447541 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 18:10:27.896788  447541 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:10:27.896818  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:10:27.896884  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.898087  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:10:27.898107  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:10:27.898193  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.902750  447541 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 18:10:27.903582  447541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:10:27.903605  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:10:27.903668  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.909478  447541 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 18:10:27.912916  447541 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 18:10:27.912942  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 18:10:27.913009  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.918840  447541 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:10:27.921197  447541 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:10:27.923842  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:10:27.923922  447541 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:10:27.924008  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.937482  447541 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:10:27.939500  447541 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:10:27.939524  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:10:27.939604  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.954492  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:27.955071  447541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:10:27.955157  447541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:10:27.957517  447541 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:10:27.961525  447541 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:10:27.966440  447541 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:10:27.969386  447541 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:10:27.969460  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:10:27.972971  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:27.992328  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.010459  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.047282  447541 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-610387"
	I0920 18:10:28.047330  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:28.047760  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:28.058590  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.133897  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.134425  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.141687  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.170126  447541 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:10:28.170149  447541 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:10:28.170215  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:28.172936  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.180876  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.184982  447541 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:10:28.191663  447541 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:10:28.193609  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:28.195111  447541 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:10:28.195131  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:10:28.195192  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	W0920 18:10:28.205993  447541 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 18:10:28.206025  447541 retry.go:31] will retry after 219.492584ms: ssh: handshake failed: EOF
	I0920 18:10:28.223517  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	W0920 18:10:28.224705  447541 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 18:10:28.224728  447541 retry.go:31] will retry after 353.117237ms: ssh: handshake failed: EOF
	I0920 18:10:28.226466  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	W0920 18:10:28.232500  447541 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 18:10:28.232528  447541 retry.go:31] will retry after 351.869148ms: ssh: handshake failed: EOF
	I0920 18:10:28.623039  447541 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:10:28.623063  447541 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:10:28.714683  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:10:28.742061  447541 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:10:28.742137  447541 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:10:28.760892  447541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:10:28.760965  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:10:28.768820  447541 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:10:28.768896  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:10:28.807679  447541 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:10:28.807707  447541 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:10:28.827895  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:10:28.845406  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:10:28.845434  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:10:28.884752  447541 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:10:28.884781  447541 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:10:28.917414  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:10:28.933369  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:10:28.955386  447541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:10:28.955428  447541 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:10:28.968326  447541 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:10:28.968354  447541 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:10:28.994052  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:10:28.995618  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 18:10:29.087323  447541 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:10:29.087349  447541 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:10:29.096737  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:10:29.096764  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:10:29.132251  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:10:29.195637  447541 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:10:29.195671  447541 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:10:29.196087  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:10:29.196106  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:10:29.246909  447541 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:10:29.246935  447541 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:10:29.260509  447541 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:10:29.260536  447541 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:10:29.275944  447541 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:10:29.275971  447541 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:10:29.401161  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:10:29.416601  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:10:29.416628  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:10:29.442585  447541 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:10:29.442613  447541 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:10:29.447293  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:10:29.489928  447541 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:10:29.489954  447541 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:10:29.526358  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:10:29.526385  447541 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:10:29.542080  447541 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:10:29.542106  447541 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:10:29.625231  447541 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:10:29.625257  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:10:29.663506  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:10:29.677098  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:10:29.713666  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:10:29.713696  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:10:29.719020  447541 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:10:29.719049  447541 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:10:29.747065  447541 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:10:29.747089  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:10:29.846718  447541 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.891538452s)
	I0920 18:10:29.847554  447541 node_ready.go:35] waiting up to 6m0s for node "addons-610387" to be "Ready" ...
	I0920 18:10:29.847755  447541 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.892665987s)
	I0920 18:10:29.847778  447541 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:10:29.853194  447541 node_ready.go:49] node "addons-610387" has status "Ready":"True"
	I0920 18:10:29.853275  447541 node_ready.go:38] duration metric: took 5.686449ms for node "addons-610387" to be "Ready" ...
	I0920 18:10:29.853301  447541 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:10:29.869151  447541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:29.975210  447541 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:10:29.975235  447541 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:10:30.020172  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:10:30.023864  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:10:30.023888  447541 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:10:30.351965  447541 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-610387" context rescaled to 1 replicas
	I0920 18:10:30.424070  447541 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:10:30.424095  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:10:30.431897  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:10:30.431922  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:10:30.647708  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:10:30.647739  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:10:30.678802  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:10:30.705741  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.990971133s)
	I0920 18:10:30.997897  447541 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:10:30.997991  447541 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:10:31.333256  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:10:31.895164  447541 pod_ready.go:103] pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace has status "Ready":"False"
	I0920 18:10:32.708635  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.775228929s)
	I0920 18:10:32.708760  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.791268016s)
	I0920 18:10:32.708791  447541 addons.go:475] Verifying addon registry=true in "addons-610387"
	I0920 18:10:32.709629  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.881664165s)
	I0920 18:10:32.712269  447541 out.go:177] * Verifying registry addon...
	I0920 18:10:32.715216  447541 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:10:32.718837  447541 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:10:32.718859  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:33.254818  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:33.732293  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:34.219545  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:34.377523  447541 pod_ready.go:103] pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace has status "Ready":"False"
	I0920 18:10:34.754551  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:34.801578  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.807486396s)
	I0920 18:10:34.801612  447541 addons.go:475] Verifying addon ingress=true in "addons-610387"
	I0920 18:10:34.803582  447541 out.go:177] * Verifying ingress addon...
	I0920 18:10:34.806265  447541 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:10:34.917141  447541 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:10:34.917171  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:35.037831  447541 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:10:35.037938  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:35.072534  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:35.243904  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:35.342197  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:35.511246  447541 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:10:35.622715  447541 addons.go:234] Setting addon gcp-auth=true in "addons-610387"
	I0920 18:10:35.622775  447541 host.go:66] Checking if "addons-610387" exists ...
	I0920 18:10:35.623281  447541 cli_runner.go:164] Run: docker container inspect addons-610387 --format={{.State.Status}}
	I0920 18:10:35.648179  447541 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:10:35.648241  447541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610387
	I0920 18:10:35.679051  447541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/addons-610387/id_rsa Username:docker}
	I0920 18:10:35.719448  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:35.821276  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:36.219807  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:36.327877  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:36.393260  447541 pod_ready.go:103] pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace has status "Ready":"False"
	I0920 18:10:36.740777  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:36.855926  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:37.258878  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:37.370499  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:37.742096  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:37.787206  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.791540836s)
	I0920 18:10:37.787272  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.654969244s)
	I0920 18:10:37.787367  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.386181051s)
	I0920 18:10:37.787434  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.340117642s)
	I0920 18:10:37.787673  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.124133135s)
	I0920 18:10:37.787687  447541 addons.go:475] Verifying addon metrics-server=true in "addons-610387"
	I0920 18:10:37.787726  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.110602503s)
	I0920 18:10:37.788021  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.767817793s)
	W0920 18:10:37.788053  447541 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:10:37.788070  447541 retry.go:31] will retry after 275.486152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:10:37.788134  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.109303072s)
	I0920 18:10:37.790643  447541 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-610387 service yakd-dashboard -n yakd-dashboard
	
	W0920 18:10:37.821331  447541 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:10:37.868982  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:38.064094  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:10:38.264319  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:38.337621  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:38.398118  447541 pod_ready.go:103] pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace has status "Ready":"False"
	I0920 18:10:38.536040  447541 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.887825528s)
	I0920 18:10:38.538699  447541 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:10:38.540803  447541 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:10:38.543111  447541 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:10:38.543142  447541 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:10:38.550694  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.217340908s)
	I0920 18:10:38.550797  447541 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-610387"
	I0920 18:10:38.555234  447541 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:10:38.559109  447541 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:10:38.563979  447541 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:10:38.564048  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:38.634974  447541 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:10:38.635052  447541 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:10:38.688100  447541 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:10:38.688173  447541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:10:38.720565  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:38.763916  447541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:10:38.811489  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:39.075734  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:39.219355  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:39.310870  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:39.569767  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:39.719838  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:39.799624  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.735433582s)
	I0920 18:10:39.821302  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:40.027019  447541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.263005537s)
	I0920 18:10:40.030294  447541 addons.go:475] Verifying addon gcp-auth=true in "addons-610387"
	I0920 18:10:40.049711  447541 out.go:177] * Verifying gcp-auth addon...
	I0920 18:10:40.055197  447541 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:10:40.059559  447541 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:10:40.064993  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:40.219669  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:40.311522  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:40.563544  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:40.719157  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:40.810860  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:40.876206  447541 pod_ready.go:93] pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:40.876279  447541 pod_ready.go:82] duration metric: took 11.00702619s for pod "coredns-7c65d6cfc9-9smxp" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.876322  447541 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zbm2s" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.878663  447541 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-zbm2s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-zbm2s" not found
	I0920 18:10:40.878745  447541 pod_ready.go:82] duration metric: took 2.395594ms for pod "coredns-7c65d6cfc9-zbm2s" in "kube-system" namespace to be "Ready" ...
	E0920 18:10:40.878772  447541 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-zbm2s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-zbm2s" not found
	I0920 18:10:40.878810  447541 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.884804  447541 pod_ready.go:93] pod "etcd-addons-610387" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:40.884883  447541 pod_ready.go:82] duration metric: took 6.046338ms for pod "etcd-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.884915  447541 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.891071  447541 pod_ready.go:93] pod "kube-apiserver-addons-610387" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:40.891142  447541 pod_ready.go:82] duration metric: took 6.190323ms for pod "kube-apiserver-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.891177  447541 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.897298  447541 pod_ready.go:93] pod "kube-controller-manager-addons-610387" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:40.897369  447541 pod_ready.go:82] duration metric: took 6.169096ms for pod "kube-controller-manager-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:40.897395  447541 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-82p2g" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:41.070215  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:41.073454  447541 pod_ready.go:93] pod "kube-proxy-82p2g" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:41.073489  447541 pod_ready.go:82] duration metric: took 176.072179ms for pod "kube-proxy-82p2g" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:41.073502  447541 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:41.219845  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:41.321005  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:41.473817  447541 pod_ready.go:93] pod "kube-scheduler-addons-610387" in "kube-system" namespace has status "Ready":"True"
	I0920 18:10:41.473848  447541 pod_ready.go:82] duration metric: took 400.336854ms for pod "kube-scheduler-addons-610387" in "kube-system" namespace to be "Ready" ...
	I0920 18:10:41.473859  447541 pod_ready.go:39] duration metric: took 11.620505063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:10:41.473876  447541 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:10:41.473942  447541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:10:41.490009  447541 api_server.go:72] duration metric: took 13.962850759s to wait for apiserver process to appear ...
	I0920 18:10:41.490036  447541 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:10:41.490058  447541 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:10:41.499267  447541 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:10:41.500307  447541 api_server.go:141] control plane version: v1.31.1
	I0920 18:10:41.500335  447541 api_server.go:131] duration metric: took 10.291411ms to wait for apiserver health ...
	I0920 18:10:41.500344  447541 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:10:41.563351  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:41.681187  447541 system_pods.go:59] 18 kube-system pods found
	I0920 18:10:41.681281  447541 system_pods.go:61] "coredns-7c65d6cfc9-9smxp" [121a8505-3eee-4f3d-9d1e-bdef568511ab] Running
	I0920 18:10:41.681306  447541 system_pods.go:61] "csi-hostpath-attacher-0" [75227892-ca33-4cfe-9104-0f22c6b45691] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:10:41.681350  447541 system_pods.go:61] "csi-hostpath-resizer-0" [d22dfd59-6373-45f8-92ea-6b3ea10bd7de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:10:41.681382  447541 system_pods.go:61] "csi-hostpathplugin-ntl7w" [61e05e9a-c240-4d48-acdb-619a9361b7cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:10:41.681406  447541 system_pods.go:61] "etcd-addons-610387" [90045249-b97c-4b78-9b85-dad6bc0bb553] Running
	I0920 18:10:41.681431  447541 system_pods.go:61] "kindnet-xpm4p" [60773596-4cc3-48dd-ab1e-2cadfa6e0c22] Running
	I0920 18:10:41.681462  447541 system_pods.go:61] "kube-apiserver-addons-610387" [e107d837-cc55-4a54-8081-8729adbe07c8] Running
	I0920 18:10:41.681497  447541 system_pods.go:61] "kube-controller-manager-addons-610387" [9391c5b8-e906-46db-862e-18cdff92add5] Running
	I0920 18:10:41.681526  447541 system_pods.go:61] "kube-ingress-dns-minikube" [f40b26cb-710f-4243-9574-74842601e221] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 18:10:41.681548  447541 system_pods.go:61] "kube-proxy-82p2g" [7a149aed-03b2-4ab4-a55a-0fca07fc4f8e] Running
	I0920 18:10:41.681579  447541 system_pods.go:61] "kube-scheduler-addons-610387" [3812950d-dee6-47b3-8e13-45dad96d0d5e] Running
	I0920 18:10:41.681607  447541 system_pods.go:61] "metrics-server-84c5f94fbc-9wbd9" [3587d5bc-bb15-4e63-b7cd-762e145e267d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:10:41.681629  447541 system_pods.go:61] "nvidia-device-plugin-daemonset-4s278" [7727b064-5967-4908-ac7f-230413845569] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 18:10:41.681654  447541 system_pods.go:61] "registry-66c9cd494c-qjm6z" [1c16fedd-152a-4247-a39f-773f4b51b9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:10:41.681688  447541 system_pods.go:61] "registry-proxy-qmjm7" [a27eb90b-d7ec-4ce6-8bc9-84bbee5a6d13] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:10:41.681719  447541 system_pods.go:61] "snapshot-controller-56fcc65765-7bkpd" [b247bd33-924e-4ee2-a571-25a69c438aad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:10:41.681743  447541 system_pods.go:61] "snapshot-controller-56fcc65765-hpf4g" [8b3f0e2b-5188-4fd8-b8f4-aa6273522a15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:10:41.681776  447541 system_pods.go:61] "storage-provisioner" [93c8379d-668b-4dbb-be7b-f1c99163f5ad] Running
	I0920 18:10:41.681813  447541 system_pods.go:74] duration metric: took 181.461238ms to wait for pod list to return data ...
	I0920 18:10:41.681837  447541 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:10:41.719655  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:41.812391  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:41.873216  447541 default_sa.go:45] found service account: "default"
	I0920 18:10:41.873246  447541 default_sa.go:55] duration metric: took 191.388945ms for default service account to be created ...
	I0920 18:10:41.873257  447541 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:10:42.065281  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:42.085121  447541 system_pods.go:86] 18 kube-system pods found
	I0920 18:10:42.085158  447541 system_pods.go:89] "coredns-7c65d6cfc9-9smxp" [121a8505-3eee-4f3d-9d1e-bdef568511ab] Running
	I0920 18:10:42.085170  447541 system_pods.go:89] "csi-hostpath-attacher-0" [75227892-ca33-4cfe-9104-0f22c6b45691] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 18:10:42.085178  447541 system_pods.go:89] "csi-hostpath-resizer-0" [d22dfd59-6373-45f8-92ea-6b3ea10bd7de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 18:10:42.085188  447541 system_pods.go:89] "csi-hostpathplugin-ntl7w" [61e05e9a-c240-4d48-acdb-619a9361b7cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 18:10:42.085192  447541 system_pods.go:89] "etcd-addons-610387" [90045249-b97c-4b78-9b85-dad6bc0bb553] Running
	I0920 18:10:42.085198  447541 system_pods.go:89] "kindnet-xpm4p" [60773596-4cc3-48dd-ab1e-2cadfa6e0c22] Running
	I0920 18:10:42.085203  447541 system_pods.go:89] "kube-apiserver-addons-610387" [e107d837-cc55-4a54-8081-8729adbe07c8] Running
	I0920 18:10:42.085210  447541 system_pods.go:89] "kube-controller-manager-addons-610387" [9391c5b8-e906-46db-862e-18cdff92add5] Running
	I0920 18:10:42.085217  447541 system_pods.go:89] "kube-ingress-dns-minikube" [f40b26cb-710f-4243-9574-74842601e221] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 18:10:42.085225  447541 system_pods.go:89] "kube-proxy-82p2g" [7a149aed-03b2-4ab4-a55a-0fca07fc4f8e] Running
	I0920 18:10:42.085230  447541 system_pods.go:89] "kube-scheduler-addons-610387" [3812950d-dee6-47b3-8e13-45dad96d0d5e] Running
	I0920 18:10:42.085236  447541 system_pods.go:89] "metrics-server-84c5f94fbc-9wbd9" [3587d5bc-bb15-4e63-b7cd-762e145e267d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 18:10:42.085250  447541 system_pods.go:89] "nvidia-device-plugin-daemonset-4s278" [7727b064-5967-4908-ac7f-230413845569] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 18:10:42.085257  447541 system_pods.go:89] "registry-66c9cd494c-qjm6z" [1c16fedd-152a-4247-a39f-773f4b51b9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 18:10:42.085274  447541 system_pods.go:89] "registry-proxy-qmjm7" [a27eb90b-d7ec-4ce6-8bc9-84bbee5a6d13] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 18:10:42.085287  447541 system_pods.go:89] "snapshot-controller-56fcc65765-7bkpd" [b247bd33-924e-4ee2-a571-25a69c438aad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:10:42.085293  447541 system_pods.go:89] "snapshot-controller-56fcc65765-hpf4g" [8b3f0e2b-5188-4fd8-b8f4-aa6273522a15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 18:10:42.085298  447541 system_pods.go:89] "storage-provisioner" [93c8379d-668b-4dbb-be7b-f1c99163f5ad] Running
	I0920 18:10:42.085310  447541 system_pods.go:126] duration metric: took 212.047723ms to wait for k8s-apps to be running ...
	I0920 18:10:42.085323  447541 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:10:42.085382  447541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:10:42.103616  447541 system_svc.go:56] duration metric: took 18.281104ms WaitForService to wait for kubelet
	I0920 18:10:42.103660  447541 kubeadm.go:582] duration metric: took 14.576508203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:10:42.103685  447541 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:10:42.223887  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:42.279711  447541 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 18:10:42.279764  447541 node_conditions.go:123] node cpu capacity is 2
	I0920 18:10:42.279778  447541 node_conditions.go:105] duration metric: took 176.085758ms to run NodePressure ...
	I0920 18:10:42.279793  447541 start.go:241] waiting for startup goroutines ...
	I0920 18:10:42.324980  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:42.570232  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:42.722267  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:42.814879  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:43.065146  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:43.219368  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:43.310500  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:43.563251  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:43.718865  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:43.811424  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:44.064041  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:44.219533  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:44.320440  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:44.563564  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:44.722102  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:44.810962  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:45.110222  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:45.220587  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:45.312448  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:45.564309  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:45.719282  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:45.820855  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:46.065404  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:46.225261  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:46.311089  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:46.563667  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:46.719602  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:46.811760  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:47.068931  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:47.220274  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:47.310638  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:47.578781  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:47.719038  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:47.812097  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:48.066345  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:48.219712  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:48.311554  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:48.563756  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:48.719453  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:48.810361  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:49.064791  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:49.219671  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:49.315779  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:49.563438  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:49.719323  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:49.812401  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:50.064136  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:50.219372  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:50.311274  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:50.565910  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:50.719193  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:50.821908  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:51.063513  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:51.219580  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:51.320897  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:51.564715  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:51.719690  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:51.811214  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:52.063769  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:52.219055  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:52.311489  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:52.566821  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:52.719466  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:52.810618  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:53.064403  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:53.220116  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:53.321428  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:53.565516  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:53.719639  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:53.811423  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:54.063919  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:54.219857  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:54.311485  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:54.563660  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:54.719551  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:54.811124  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:55.087901  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:55.221038  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:55.310867  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:55.563902  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:55.719707  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:55.810358  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:56.064806  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:56.218635  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:56.311111  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:56.584404  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:56.801841  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:56.874454  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:57.064349  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:57.219066  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:10:57.311391  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:57.572208  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:57.719118  447541 kapi.go:107] duration metric: took 25.003901455s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:10:57.812187  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:58.063988  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:58.311593  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:58.569624  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:58.811120  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:59.064408  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:59.310434  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:10:59.564782  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:10:59.811195  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:00.118936  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:00.350417  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:00.564835  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:00.814973  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:01.064457  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:01.311154  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:01.665469  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:01.810905  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:02.068495  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:02.311085  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:02.565293  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:02.811743  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:03.162238  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:03.310906  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:03.567737  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:03.811675  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:04.064699  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:04.311263  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:04.563945  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:04.810783  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:05.072038  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:05.312424  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:05.563881  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:05.811635  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:06.064886  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:06.311214  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:06.563822  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:06.811903  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:07.063374  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:07.311270  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:07.564365  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:07.811087  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:08.064238  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:08.311609  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:08.565349  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:08.811300  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:09.065220  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:09.310882  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:09.563263  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:09.812305  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:10.085942  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:10.311078  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:10.564489  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:10.811501  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:11.064083  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:11.310991  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:11.566566  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:11.812712  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:12.066083  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:12.310501  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:12.567927  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:12.812702  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:13.066606  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:13.312055  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:13.568896  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:13.811531  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:14.064860  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:14.311917  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:14.566280  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:14.811262  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:15.068533  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:15.310652  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:15.566883  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:15.814469  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:16.065905  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:16.312413  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:16.564793  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:16.811726  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:17.063953  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:17.312440  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:17.564455  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:17.811679  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:18.067305  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:18.310991  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:18.564896  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:18.826821  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:19.063756  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:19.310983  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:19.564024  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:19.810672  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:20.065421  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:20.311549  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:20.564066  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:20.810597  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:21.065260  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:21.311610  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:21.563912  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:21.813868  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:22.064021  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:22.310988  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:22.564676  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:22.812884  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:23.064187  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:23.311081  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:23.665945  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:23.811862  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:24.064074  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:24.318483  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:24.564074  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:24.812563  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:25.070958  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:25.311266  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:25.564977  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:25.814419  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:26.070061  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:26.311187  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:26.564324  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:26.815195  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:27.064405  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:27.310430  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:27.563548  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:27.817289  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:28.064217  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:28.311314  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:28.571166  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:28.810362  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:29.063909  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:29.311833  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:29.564309  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:11:29.810750  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:30.097687  447541 kapi.go:107] duration metric: took 51.538573224s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:11:30.311206  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:30.811562  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:31.311573  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:31.811274  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:32.310892  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:32.810919  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:33.311413  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:33.811079  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:34.311681  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:34.811252  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:35.310470  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:35.811440  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:36.311040  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:36.810792  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:37.310224  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:37.810590  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:38.311675  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:38.811996  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:39.311020  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:39.811325  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:40.311087  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:40.811364  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:41.312551  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:41.810944  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:42.313696  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:42.810173  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:43.311674  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:43.817458  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:44.316938  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:44.811196  447541 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:11:45.312070  447541 kapi.go:107] duration metric: took 1m10.505798546s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:12:03.059540  447541 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:12:03.059565  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:03.559544  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:04.060053  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:04.558770  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:05.061571  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:05.559934  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:06.059148  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:06.558831  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:07.060925  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:07.559319  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:08.059527  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:08.559721  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:09.058786  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:09.559100  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:10.062620  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:10.559993  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:11.059313  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:11.559700  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:12.059718  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:12.558625  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:13.060372  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:13.559717  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:14.058574  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:14.559646  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:15.060347  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:15.559007  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:16.059409  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:16.559796  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:17.059449  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:17.559694  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:18.058819  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:18.559015  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:19.058748  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:19.559611  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:20.060209  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:20.558984  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:21.059632  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:21.559286  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:22.059549  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:22.559681  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:23.059606  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:23.560461  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:24.060180  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:24.559627  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:25.059823  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:25.559592  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:26.059295  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:26.558691  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:27.059139  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:27.558658  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:28.059331  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:28.559338  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:29.058641  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:29.558642  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:30.091738  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:30.559717  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:31.058515  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:31.559407  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:32.059214  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:32.559188  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:33.058723  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:33.559594  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:34.058698  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:34.558881  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:35.062024  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:35.559133  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:36.059072  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:36.558992  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:37.059333  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:37.558954  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:38.059224  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:38.559072  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:39.058956  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:39.559326  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:40.061126  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:40.558984  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:41.059750  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:41.558868  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:42.059048  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:42.558815  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:43.062515  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:43.559837  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:44.064604  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:44.559576  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:45.076804  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:45.558581  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:46.059092  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:46.559436  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:47.058507  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:47.559249  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:48.059603  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:48.559505  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:49.059663  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:49.559114  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:50.059680  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:50.560352  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:51.059715  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:51.560326  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:52.058504  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:52.559704  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:53.059510  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:53.559761  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:54.059273  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:54.559493  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:55.060561  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:55.558558  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:56.059662  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:56.559551  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:57.059230  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:57.558468  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:58.059450  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:58.559842  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:59.058654  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:12:59.558448  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:00.082101  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:00.559777  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:01.059702  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:01.559759  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:02.058478  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:02.560048  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:03.059409  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:03.559806  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:04.059984  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:04.560516  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:05.059870  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:05.558218  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:06.059733  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:06.559237  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:07.058524  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:07.559502  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:08.059722  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:08.558988  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:09.059364  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:09.559193  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:10.065038  447541 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:13:10.560519  447541 kapi.go:107] duration metric: took 2m30.505324407s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:13:10.562690  447541 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-610387 cluster.
	I0920 18:13:10.564879  447541 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:13:10.566939  447541 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:13:10.569180  447541 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:13:10.571123  447541 addons.go:510] duration metric: took 2m43.043537063s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner volcano ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:13:10.571179  447541 start.go:246] waiting for cluster config update ...
	I0920 18:13:10.571202  447541 start.go:255] writing updated cluster config ...
	I0920 18:13:10.571501  447541 ssh_runner.go:195] Run: rm -f paused
	I0920 18:13:10.907805  447541 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:13:10.910804  447541 out.go:177] * Done! kubectl is now configured to use "addons-610387" cluster and "default" namespace by default
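	Note (illustrative sketch, not captured from this run): the gcp-auth output above names the `gcp-auth-skip-secret` label key as the way to keep credentials out of a specific pod. A minimal pod manifest using that label might look like the following; the label value "true", the pod name, and the image are assumptions for illustration only, since the addon output names only the key.
	
	    # Illustrative only: opt a single pod out of GCP credential mounting.
	    # Label value "true" and all names below are assumptions.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo      # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app
	        image: nginx
	
	For pods that already exist, the output above notes the alternative: recreate them, or rerun the addon enable step with --refresh so they pick up the mounted credentials.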
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4704eda8b63e8       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   164e20c093f6e       gadget-2s8f2
	b724ba56be796       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   c6022d64cd6dd       gcp-auth-89d5ffd79-bvs7q
	50611be60d667       8b46b1cd48760       4 minutes ago       Running             admission                                0                   7e2f1e885e9ba       volcano-admission-77d7d48b68-868xm
	1cfa37d12a525       289a818c8d9c5       4 minutes ago       Running             controller                               0                   2a5f5f05978ae       ingress-nginx-controller-bc57996ff-45khg
	3a99f6a40f605       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   12b3f29960059       csi-hostpathplugin-ntl7w
	97be30de38962       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   12b3f29960059       csi-hostpathplugin-ntl7w
	abcbf4512a764       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   12b3f29960059       csi-hostpathplugin-ntl7w
	61ac26473f2c1       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   12b3f29960059       csi-hostpathplugin-ntl7w
	4079c11f1158e       420193b27261a       5 minutes ago       Exited              patch                                    2                   bed283da2f806       ingress-nginx-admission-patch-dh7cf
	729b07920c8ac       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   12b3f29960059       csi-hostpathplugin-ntl7w
	568eb777c5467       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   79a9e3b53cb21       csi-hostpath-attacher-0
	e26bc1071e9fd       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   12b3f29960059       csi-hostpathplugin-ntl7w
	1fa1c8085b12b       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   e1a2788d77fab       csi-hostpath-resizer-0
	588f736128737       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   4aae3b5895715       volcano-scheduler-576bc46687-96kkn
	0d73570a4f032       420193b27261a       5 minutes ago       Exited              create                                   0                   44e57fc1ee601       ingress-nginx-admission-create-q8tm9
	280163a51ca7e       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   5ac7c7341b007       volcano-controllers-56675bb4d5-qlxrw
	47ba69bb05b96       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   7190fd98577f6       metrics-server-84c5f94fbc-9wbd9
	4387e4a6ef633       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   e977c30515d93       local-path-provisioner-86d989889c-hd7sb
	ae3fb13e2ab67       77bdba588b953       5 minutes ago       Running             yakd                                     0                   f0434dc919dbb       yakd-dashboard-67d98fc6b-fbls6
	0395d4c565a46       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   af5f4ea602ff5       snapshot-controller-56fcc65765-hpf4g
	4356c536e8781       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   ebd3ae25ff94b       snapshot-controller-56fcc65765-7bkpd
	0a5b50e953948       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   d797a6b5c79d0       cloud-spanner-emulator-5b584cc74-bpfcl
	6e3b170b34119       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   33c6cccc71a53       registry-proxy-qmjm7
	3ee6b9eebb9f5       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   2aa98426ab37e       registry-66c9cd494c-qjm6z
	4fddd536093c8       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   d5fd727cb123f       nvidia-device-plugin-daemonset-4s278
	03c94fb9e8e3e       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   d2220f2731d6f       kube-ingress-dns-minikube
	06afe51216cbf       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   d245fbe166b5f       coredns-7c65d6cfc9-9smxp
	46f629b83c1ff       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   970b547d0baf0       storage-provisioner
	161a09fceb021       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   823c8b75530b1       kindnet-xpm4p
	9a742dc73ccf2       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   a24b950a6d0ac       kube-proxy-82p2g
	6cb7add4e9518       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   78c16c7584162       kube-controller-manager-addons-610387
	167230c109c45       27e3830e14027       6 minutes ago       Running             etcd                                     0                   cda98420d7aac       etcd-addons-610387
	82b71900840a2       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   61b09864ff708       kube-scheduler-addons-610387
	3161150392c5f       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   b0e6ac871ea6d       kube-apiserver-addons-610387
	
	
	==> containerd <==
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.270645821Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.274112499Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 126.781475ms"
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.274158136Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.276375931Z" level=info msg="CreateContainer within sandbox \"164e20c093f6e9c564b7420e7109bb622bab66e30e54d6609a5f98b32afff067\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.299673607Z" level=info msg="CreateContainer within sandbox \"164e20c093f6e9c564b7420e7109bb622bab66e30e54d6609a5f98b32afff067\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\""
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.300442862Z" level=info msg="StartContainer for \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\""
	Sep 20 18:14:15 addons-610387 containerd[816]: time="2024-09-20T18:14:15.354409343Z" level=info msg="StartContainer for \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\" returns successfully"
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.748028983Z" level=error msg="ExecSync for \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\" failed" error="failed to exec in container: failed to start exec \"ed9d24b4b5e3a41bfc3565955bca090ca3ccd2cf5dd7d16c391bda6f1568e0b2\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.771416826Z" level=error msg="ExecSync for \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\" failed" error="failed to exec in container: failed to start exec \"6c17aebab558f11d40ea531145da7751686c05c418e887fb13a65ccc8e36b61d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.798054828Z" level=error msg="ExecSync for \"4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4\" failed" error="failed to exec in container: failed to start exec \"ec3669f0bb47de757766e23089ca228107bcd50b1a2a1283ac1dfb6dc622bb2f\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.907110992Z" level=info msg="shim disconnected" id=4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4 namespace=k8s.io
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.907194636Z" level=warning msg="cleaning up after shim disconnected" id=4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4 namespace=k8s.io
	Sep 20 18:14:16 addons-610387 containerd[816]: time="2024-09-20T18:14:16.907205467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 18:14:17 addons-610387 containerd[816]: time="2024-09-20T18:14:17.407677029Z" level=info msg="RemoveContainer for \"a4d80c6569e4cd4dd6b37551afc4836df1f0199a31484adc802b0187d458f3d7\""
	Sep 20 18:14:17 addons-610387 containerd[816]: time="2024-09-20T18:14:17.421997669Z" level=info msg="RemoveContainer for \"a4d80c6569e4cd4dd6b37551afc4836df1f0199a31484adc802b0187d458f3d7\" returns successfully"
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.272474842Z" level=info msg="RemoveContainer for \"fb65e0bdcd373c63f286e33bddac08baa069414082b345372c4bced88ccb98b9\""
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.279101495Z" level=info msg="RemoveContainer for \"fb65e0bdcd373c63f286e33bddac08baa069414082b345372c4bced88ccb98b9\" returns successfully"
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.281010249Z" level=info msg="StopPodSandbox for \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\""
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.290540907Z" level=info msg="TearDown network for sandbox \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\" successfully"
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.290747687Z" level=info msg="StopPodSandbox for \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\" returns successfully"
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.291419226Z" level=info msg="RemovePodSandbox for \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\""
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.291595728Z" level=info msg="Forcibly stopping sandbox \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\""
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.308721263Z" level=info msg="TearDown network for sandbox \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\" successfully"
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.315203430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 20 18:14:22 addons-610387 containerd[816]: time="2024-09-20T18:14:22.315374911Z" level=info msg="RemovePodSandbox \"ad32b25cda9d0f287d7c9eeb862aad9527f3f909f19cd97437a6304839a2f05e\" returns successfully"
	
	
	==> coredns [06afe51216cbf04b3aa75a84507929f2fe892e4852e380dacd19ec1d49c0c03f] <==
	[INFO] 10.244.0.5:38549 - 59364 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011187s
	[INFO] 10.244.0.5:32817 - 5978 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005959297s
	[INFO] 10.244.0.5:32817 - 12638 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006368229s
	[INFO] 10.244.0.5:39073 - 60427 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000197688s
	[INFO] 10.244.0.5:39073 - 20997 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141343s
	[INFO] 10.244.0.5:45105 - 11693 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00016618s
	[INFO] 10.244.0.5:45105 - 7082 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040107s
	[INFO] 10.244.0.5:54299 - 16886 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000604437s
	[INFO] 10.244.0.5:54299 - 54773 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170225s
	[INFO] 10.244.0.5:44715 - 50587 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042832s
	[INFO] 10.244.0.5:44715 - 36766 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000204498s
	[INFO] 10.244.0.5:53829 - 49434 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00204327s
	[INFO] 10.244.0.5:53829 - 17669 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002352188s
	[INFO] 10.244.0.5:34844 - 44706 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083439s
	[INFO] 10.244.0.5:34844 - 21436 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070262s
	[INFO] 10.244.0.24:39529 - 3383 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000249882s
	[INFO] 10.244.0.24:55684 - 18559 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130848s
	[INFO] 10.244.0.24:34504 - 24125 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133458s
	[INFO] 10.244.0.24:35824 - 8861 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108932s
	[INFO] 10.244.0.24:47781 - 26530 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108063s
	[INFO] 10.244.0.24:40177 - 32531 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001398s
	[INFO] 10.244.0.24:41498 - 37156 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002729694s
	[INFO] 10.244.0.24:54362 - 32548 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002433017s
	[INFO] 10.244.0.24:56314 - 59640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003447494s
	[INFO] 10.244.0.24:36593 - 10704 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002867566s
	
	
	==> describe nodes <==
	Name:               addons-610387
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-610387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-610387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_10_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-610387
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-610387"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:10:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-610387
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:16:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:13:26 +0000   Fri, 20 Sep 2024 18:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:13:26 +0000   Fri, 20 Sep 2024 18:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:13:26 +0000   Fri, 20 Sep 2024 18:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:13:26 +0000   Fri, 20 Sep 2024 18:10:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-610387
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fccb499291d548b7a6239045ca21ba11
	  System UUID:                fe0a0b8d-54a1-460b-a01b-bf688e2e416a
	  Boot ID:                    cfeac633-1b4b-4878-a7d1-bdd76da68a0f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-bpfcl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-2s8f2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-bvs7q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-45khg    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-9smxp                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-ntl7w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-610387                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-xpm4p                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-610387                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-610387       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-82p2g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-610387                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-9wbd9             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-4s278        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-66c9cd494c-qjm6z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-qmjm7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-7bkpd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-hpf4g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-hd7sb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-77d7d48b68-868xm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-56675bb4d5-qlxrw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-96kkn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fbls6              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node addons-610387 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m14s)  kubelet          Node addons-610387 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node addons-610387 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-610387 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-610387 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-610387 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-610387 event: Registered Node addons-610387 in Controller
	
	
	==> dmesg <==
	[Sep20 17:29] hrtimer: interrupt took 4627734 ns
	[Sep20 17:41] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep20 17:43] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.012326] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.005861] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.189191] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [167230c109c450eae731e10dd52ff555dddafeb2c355684e4d8f9717a83ac082] <==
	{"level":"info","ts":"2024-09-20T18:10:16.487606Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T18:10:16.487854Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T18:10:16.488012Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T18:10:16.489257Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T18:10:16.489370Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T18:10:16.842354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T18:10:16.842581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T18:10:16.842698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T18:10:16.842855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T18:10:16.842948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T18:10:16.843055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:10:16.843135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T18:10:16.846436Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:10:16.848637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-610387 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:10:16.848786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:10:16.849293Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:10:16.849503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:10:16.849618Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:10:16.849718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:10:16.850635Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:10:16.851753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T18:10:16.858537Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:10:16.868136Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:10:16.867056Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:10:16.868492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [b724ba56be79692277930fe93fa1b0501639f4200e1c796fe184894210a2ec16] <==
	2024/09/20 18:13:10 GCP Auth Webhook started!
	2024/09/20 18:13:27 Ready to marshal response ...
	2024/09/20 18:13:27 Ready to write response ...
	2024/09/20 18:13:28 Ready to marshal response ...
	2024/09/20 18:13:28 Ready to write response ...
	
	
	==> kernel <==
	 18:16:29 up  1:59,  0 users,  load average: 0.22, 1.40, 2.69
	Linux addons-610387 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [161a09fceb021371ec9c9a06740b0d5bc94a64e6afa5e7cf2d256211f5481c25] <==
	I0920 18:14:29.019242       1 main.go:299] handling current node
	I0920 18:14:39.026422       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:14:39.026646       1 main.go:299] handling current node
	I0920 18:14:49.026447       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:14:49.026487       1 main.go:299] handling current node
	I0920 18:14:59.026535       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:14:59.026570       1 main.go:299] handling current node
	I0920 18:15:09.026382       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:09.026420       1 main.go:299] handling current node
	I0920 18:15:19.027791       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:19.027892       1 main.go:299] handling current node
	I0920 18:15:29.019470       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:29.019507       1 main.go:299] handling current node
	I0920 18:15:39.028233       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:39.028268       1 main.go:299] handling current node
	I0920 18:15:49.025372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:49.025409       1 main.go:299] handling current node
	I0920 18:15:59.018774       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:15:59.018809       1 main.go:299] handling current node
	I0920 18:16:09.026419       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:16:09.026457       1 main.go:299] handling current node
	I0920 18:16:19.023996       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:16:19.024034       1 main.go:299] handling current node
	I0920 18:16:29.019305       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:16:29.019340       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3161150392c5f7f7a15844e9677affd72b352435d8fab8f987d7bd9db7893d71] <==
	W0920 18:11:40.671285       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:41.704505       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:42.739526       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:43.012100       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.101.93:443: connect: connection refused
	E0920 18:11:43.012145       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.101.93:443: connect: connection refused" logger="UnhandledError"
	W0920 18:11:43.013844       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:43.082655       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.101.93:443: connect: connection refused
	E0920 18:11:43.082696       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.101.93:443: connect: connection refused" logger="UnhandledError"
	W0920 18:11:43.084291       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:43.775028       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:44.795993       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:45.889394       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:46.955057       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:48.041384       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:49.088631       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:50.122471       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:11:51.193596       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.108.202:443: connect: connection refused
	W0920 18:12:02.928294       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.101.93:443: connect: connection refused
	E0920 18:12:02.928338       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.101.93:443: connect: connection refused" logger="UnhandledError"
	W0920 18:12:43.022171       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.101.93:443: connect: connection refused
	E0920 18:12:43.022225       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.101.93:443: connect: connection refused" logger="UnhandledError"
	W0920 18:12:43.090908       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.101.93:443: connect: connection refused
	E0920 18:12:43.090956       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.101.93:443: connect: connection refused" logger="UnhandledError"
	I0920 18:13:27.453881       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 18:13:27.510393       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
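	Note (illustrative sketch, not taken from this cluster's configuration): the webhook errors above show two behaviours, "failing closed" for the volcano mutatequeue/mutatepod webhooks (the request is rejected while the webhook service is unreachable) and "failing open" for gcp-auth-mutate (the request is admitted anyway). In a MutatingWebhookConfiguration that distinction is expressed by failurePolicy: Fail versus Ignore. Every name in the sketch below is hypothetical.
	
	    # Illustrative only: a webhook that "fails closed" when its backing service is down.
	    apiVersion: admissionregistration.k8s.io/v1
	    kind: MutatingWebhookConfiguration
	    metadata:
	      name: example-webhook                 # hypothetical
	    webhooks:
	    - name: example.mutate.example.com      # hypothetical
	      failurePolicy: Fail                   # Fail = "failing closed"; Ignore = "failing open"
	      clientConfig:
	        service:
	          name: example-svc                 # hypothetical service backing the webhook
	          namespace: example-ns
	          path: /mutate
	      admissionReviewVersions: ["v1"]
	      sideEffects: None
	      rules:
	      - apiGroups: [""]
	        apiVersions: ["v1"]
	        operations: ["CREATE"]
	        resources: ["pods"]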
	
	
	==> kube-controller-manager [6cb7add4e9518e04f3d86fc198afbc76aecf7d24500a8974480a18e6951a60b2] <==
	I0920 18:12:43.047560       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:43.053803       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:43.070665       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:43.103260       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:43.123325       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:43.140486       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:43.152970       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:44.059790       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:44.078672       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:45.287629       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:45.330208       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:46.295024       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:46.301869       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:46.309161       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 18:12:46.336200       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:46.345327       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:12:46.351299       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 18:13:10.230195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="14.207482ms"
	I0920 18:13:10.231509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="51.373µs"
	I0920 18:13:16.021388       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 18:13:16.023176       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 18:13:16.086752       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 18:13:16.090052       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 18:13:26.573180       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-610387"
	I0920 18:13:27.171489       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [9a742dc73ccf29b8cf220c9fe9328787f4ba4a11e5ff8c949aab4305a8f6f780] <==
	I0920 18:10:28.536122       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:10:28.651262       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:10:28.651345       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:10:28.689926       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:10:28.690000       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:10:28.701086       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:10:28.701689       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:10:28.701708       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:10:28.709145       1 config.go:199] "Starting service config controller"
	I0920 18:10:28.709181       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:10:28.709217       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:10:28.709222       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:10:28.711354       1 config.go:328] "Starting node config controller"
	I0920 18:10:28.711367       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:10:28.810232       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:10:28.810294       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:10:28.811864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [82b71900840a264c077b2e9280e0981ff135cfe6e6640c997f72424d9342fb98] <==
	W0920 18:10:19.591797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:10:19.591843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.591961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:10:19.592008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:10:19.592150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:10:19.592328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:10:19.592535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:10:19.592697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:10:19.592848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.592942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:10:19.592987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.593099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:10:19.593147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:19.593192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:10:19.593252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:20.667457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:10:20.667807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:10:20.733905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:10:20.733953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 18:10:21.075463       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:14:19 addons-610387 kubelet[1493]: E0920 18:14:19.489486    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:14:22 addons-610387 kubelet[1493]: I0920 18:14:22.270120    1493 scope.go:117] "RemoveContainer" containerID="fb65e0bdcd373c63f286e33bddac08baa069414082b345372c4bced88ccb98b9"
	Sep 20 18:14:31 addons-610387 kubelet[1493]: I0920 18:14:31.145535    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:14:31 addons-610387 kubelet[1493]: E0920 18:14:31.145762    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:14:33 addons-610387 kubelet[1493]: I0920 18:14:33.146047    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-qjm6z" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 18:14:42 addons-610387 kubelet[1493]: I0920 18:14:42.147314    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qmjm7" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 18:14:46 addons-610387 kubelet[1493]: I0920 18:14:46.145957    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:14:46 addons-610387 kubelet[1493]: E0920 18:14:46.146154    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:15:00 addons-610387 kubelet[1493]: I0920 18:15:00.152945    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:15:00 addons-610387 kubelet[1493]: E0920 18:15:00.164776    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:15:13 addons-610387 kubelet[1493]: I0920 18:15:13.145489    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:15:13 addons-610387 kubelet[1493]: E0920 18:15:13.145714    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:15:25 addons-610387 kubelet[1493]: I0920 18:15:25.145652    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:15:25 addons-610387 kubelet[1493]: E0920 18:15:25.146438    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:15:37 addons-610387 kubelet[1493]: I0920 18:15:37.146168    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:15:37 addons-610387 kubelet[1493]: I0920 18:15:37.146375    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4s278" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 18:15:37 addons-610387 kubelet[1493]: E0920 18:15:37.147104    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:15:50 addons-610387 kubelet[1493]: I0920 18:15:50.146087    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:15:50 addons-610387 kubelet[1493]: E0920 18:15:50.146876    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:16:00 addons-610387 kubelet[1493]: I0920 18:16:00.215483    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-qjm6z" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 18:16:01 addons-610387 kubelet[1493]: I0920 18:16:01.146532    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qmjm7" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 18:16:05 addons-610387 kubelet[1493]: I0920 18:16:05.145516    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:16:05 addons-610387 kubelet[1493]: E0920 18:16:05.145713    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	Sep 20 18:16:19 addons-610387 kubelet[1493]: I0920 18:16:19.146368    1493 scope.go:117] "RemoveContainer" containerID="4704eda8b63e8d43ef792d851a1274267b908664c4774f7dd9f1169c4d8aaac4"
	Sep 20 18:16:19 addons-610387 kubelet[1493]: E0920 18:16:19.146599    1493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-2s8f2_gadget(a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37)\"" pod="gadget/gadget-2s8f2" podUID="a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37"
	
	
	==> storage-provisioner [46f629b83c1ff6d48754db65a9e22e4313c11b72e574330e951dfbc07826c1b3] <==
	I0920 18:10:33.634674       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:10:33.654121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:10:33.654190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:10:33.669662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:10:33.669745       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28efd0a9-6925-42e8-811c-c14705ea36c7", APIVersion:"v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-610387_c77835d1-29ed-4024-bad8-4e1927ad46b9 became leader
	I0920 18:10:33.671082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-610387_c77835d1-29ed-4024-bad8-4e1927ad46b9!
	I0920 18:10:33.773656       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-610387_c77835d1-29ed-4024-bad8-4e1927ad46b9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-610387 -n addons-610387
helpers_test.go:261: (dbg) Run:  kubectl --context addons-610387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-q8tm9 ingress-nginx-admission-patch-dh7cf test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-610387 describe pod ingress-nginx-admission-create-q8tm9 ingress-nginx-admission-patch-dh7cf test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-610387 describe pod ingress-nginx-admission-create-q8tm9 ingress-nginx-admission-patch-dh7cf test-job-nginx-0: exit status 1 (93.203921ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-q8tm9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dh7cf" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-610387 describe pod ingress-nginx-admission-create-q8tm9 ingress-nginx-admission-patch-dh7cf test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.98s)

TestStartStop/group/old-k8s-version/serial/SecondStart (378.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-809747 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-809747 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.24579557s)

-- stdout --
	* [old-k8s-version-809747] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-809747" primary control-plane node in "old-k8s-version-809747" cluster
	* Pulling base image v0.0.45-1726589491-19662 ...
	* Restarting existing docker container for "old-k8s-version-809747" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-809747 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0920 19:00:24.876450  653977 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:00:24.876882  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:00:24.876893  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:00:24.876899  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:00:24.877160  653977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 19:00:24.877559  653977 out.go:352] Setting JSON to false
	I0920 19:00:24.878520  653977 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9776,"bootTime":1726849049,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:00:24.878594  653977 start.go:139] virtualization:  
	I0920 19:00:24.882039  653977 out.go:177] * [old-k8s-version-809747] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:00:24.885159  653977 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:00:24.885243  653977 notify.go:220] Checking for updates...
	I0920 19:00:24.891460  653977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:00:24.893853  653977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 19:00:24.896127  653977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 19:00:24.898864  653977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:00:24.901732  653977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:00:24.904937  653977 config.go:182] Loaded profile config "old-k8s-version-809747": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 19:00:24.907676  653977 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 19:00:24.910092  653977 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:00:24.960859  653977 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:00:24.961000  653977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:00:25.063260  653977 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-20 19:00:25.050134038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:00:25.063377  653977 docker.go:318] overlay module found
	I0920 19:00:25.066293  653977 out.go:177] * Using the docker driver based on existing profile
	I0920 19:00:25.068491  653977 start.go:297] selected driver: docker
	I0920 19:00:25.068521  653977 start.go:901] validating driver "docker" against &{Name:old-k8s-version-809747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-809747 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:00:25.068634  653977 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:00:25.069283  653977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:00:25.162128  653977 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-20 19:00:25.150020358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:00:25.162671  653977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:00:25.162706  653977 cni.go:84] Creating CNI manager for ""
	I0920 19:00:25.162750  653977 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:00:25.162795  653977 start.go:340] cluster config:
	{Name:old-k8s-version-809747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-809747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:00:25.165133  653977 out.go:177] * Starting "old-k8s-version-809747" primary control-plane node in "old-k8s-version-809747" cluster
	I0920 19:00:25.167310  653977 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 19:00:25.169325  653977 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:00:25.171366  653977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 19:00:25.171494  653977 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 19:00:25.171508  653977 cache.go:56] Caching tarball of preloaded images
	I0920 19:00:25.171666  653977 preload.go:172] Found /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 19:00:25.171676  653977 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0920 19:00:25.171841  653977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/config.json ...
	I0920 19:00:25.172205  653977 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	W0920 19:00:25.216073  653977 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 19:00:25.216093  653977 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:00:25.216180  653977 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:00:25.216204  653977 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:00:25.216209  653977 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:00:25.216217  653977 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:00:25.216222  653977 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:00:25.343976  653977 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:00:25.344022  653977 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:00:25.344052  653977 start.go:360] acquireMachinesLock for old-k8s-version-809747: {Name:mk4e6a11f9e9fe3a1d94ce483f3b7f94de9083d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:00:25.344134  653977 start.go:364] duration metric: took 62.153µs to acquireMachinesLock for "old-k8s-version-809747"
	I0920 19:00:25.344155  653977 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:00:25.344161  653977 fix.go:54] fixHost starting: 
	I0920 19:00:25.344464  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:25.378782  653977 fix.go:112] recreateIfNeeded on old-k8s-version-809747: state=Stopped err=<nil>
	W0920 19:00:25.378810  653977 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:00:25.381392  653977 out.go:177] * Restarting existing docker container for "old-k8s-version-809747" ...
	I0920 19:00:25.383416  653977 cli_runner.go:164] Run: docker start old-k8s-version-809747
	I0920 19:00:25.866897  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:25.891785  653977 kic.go:430] container "old-k8s-version-809747" state is running.
	I0920 19:00:25.892195  653977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-809747
	I0920 19:00:25.915015  653977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/config.json ...
	I0920 19:00:25.915418  653977 machine.go:93] provisionDockerMachine start ...
	I0920 19:00:25.915594  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:25.940635  653977 main.go:141] libmachine: Using SSH client type: native
	I0920 19:00:25.940915  653977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0920 19:00:25.940932  653977 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:00:25.941478  653977 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52218->127.0.0.1:33063: read: connection reset by peer
	I0920 19:00:29.102531  653977 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-809747
	
	I0920 19:00:29.102561  653977 ubuntu.go:169] provisioning hostname "old-k8s-version-809747"
	I0920 19:00:29.102653  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:29.137178  653977 main.go:141] libmachine: Using SSH client type: native
	I0920 19:00:29.137429  653977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0920 19:00:29.137442  653977 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-809747 && echo "old-k8s-version-809747" | sudo tee /etc/hostname
	I0920 19:00:29.324628  653977 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-809747
	
	I0920 19:00:29.324745  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:29.356196  653977 main.go:141] libmachine: Using SSH client type: native
	I0920 19:00:29.356456  653977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0920 19:00:29.356482  653977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-809747' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-809747/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-809747' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:00:29.510884  653977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:00:29.510913  653977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-440039/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-440039/.minikube}
	I0920 19:00:29.510939  653977 ubuntu.go:177] setting up certificates
	I0920 19:00:29.510949  653977 provision.go:84] configureAuth start
	I0920 19:00:29.511021  653977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-809747
	I0920 19:00:29.548222  653977 provision.go:143] copyHostCerts
	I0920 19:00:29.548291  653977 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem, removing ...
	I0920 19:00:29.548300  653977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem
	I0920 19:00:29.548378  653977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem (1675 bytes)
	I0920 19:00:29.548535  653977 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem, removing ...
	I0920 19:00:29.548542  653977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem
	I0920 19:00:29.548573  653977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem (1082 bytes)
	I0920 19:00:29.548638  653977 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem, removing ...
	I0920 19:00:29.548643  653977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem
	I0920 19:00:29.548672  653977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem (1123 bytes)
	I0920 19:00:29.548725  653977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-809747 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-809747]
	I0920 19:00:29.950083  653977 provision.go:177] copyRemoteCerts
	I0920 19:00:29.950157  653977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:00:29.950215  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:29.967190  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:30.078064  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 19:00:30.126063  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:00:30.179958  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:00:30.222932  653977 provision.go:87] duration metric: took 711.963992ms to configureAuth
	I0920 19:00:30.222966  653977 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:00:30.223203  653977 config.go:182] Loaded profile config "old-k8s-version-809747": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 19:00:30.223219  653977 machine.go:96] duration metric: took 4.307784257s to provisionDockerMachine
	I0920 19:00:30.223229  653977 start.go:293] postStartSetup for "old-k8s-version-809747" (driver="docker")
	I0920 19:00:30.223245  653977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:00:30.223316  653977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:00:30.223364  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:30.256887  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:30.375920  653977 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:00:30.379715  653977 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:00:30.379757  653977 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:00:30.379768  653977 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:00:30.379784  653977 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:00:30.379798  653977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/addons for local assets ...
	I0920 19:00:30.379859  653977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/files for local assets ...
	I0920 19:00:30.379954  653977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem -> 4467832.pem in /etc/ssl/certs
	I0920 19:00:30.380065  653977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:00:30.394626  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem --> /etc/ssl/certs/4467832.pem (1708 bytes)
	I0920 19:00:30.430948  653977 start.go:296] duration metric: took 207.699465ms for postStartSetup
	I0920 19:00:30.431050  653977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:00:30.431091  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:30.459854  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:30.567781  653977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:00:30.572858  653977 fix.go:56] duration metric: took 5.228688785s for fixHost
	I0920 19:00:30.572887  653977 start.go:83] releasing machines lock for "old-k8s-version-809747", held for 5.228743011s
	I0920 19:00:30.572966  653977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-809747
	I0920 19:00:30.604073  653977 ssh_runner.go:195] Run: cat /version.json
	I0920 19:00:30.604127  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:30.604137  653977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:00:30.604199  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:30.642519  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:30.643213  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:30.754050  653977 ssh_runner.go:195] Run: systemctl --version
	I0920 19:00:30.925755  653977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:00:30.931937  653977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 19:00:30.973150  653977 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:00:30.973231  653977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:00:30.988739  653977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 19:00:30.988769  653977 start.go:495] detecting cgroup driver to use...
	I0920 19:00:30.988817  653977 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:00:30.988887  653977 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 19:00:31.010109  653977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 19:00:31.033753  653977 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:00:31.033872  653977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:00:31.049044  653977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:00:31.063358  653977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:00:31.228694  653977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:00:31.391307  653977 docker.go:233] disabling docker service ...
	I0920 19:00:31.391434  653977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:00:31.413123  653977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:00:31.426807  653977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:00:31.584301  653977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:00:31.734988  653977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:00:31.756684  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:00:31.784859  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0920 19:00:31.798603  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 19:00:31.817982  653977 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 19:00:31.818104  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 19:00:31.833086  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:00:31.848398  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 19:00:31.859757  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:00:31.874050  653977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:00:31.887927  653977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 19:00:31.904131  653977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:00:31.916170  653977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:00:31.931978  653977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:00:32.076726  653977 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 19:00:32.361540  653977 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 19:00:32.361687  653977 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 19:00:32.369397  653977 start.go:563] Will wait 60s for crictl version
	I0920 19:00:32.369514  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:00:32.382829  653977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:00:32.486071  653977 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 19:00:32.486211  653977 ssh_runner.go:195] Run: containerd --version
	I0920 19:00:32.531190  653977 ssh_runner.go:195] Run: containerd --version
	I0920 19:00:32.585342  653977 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0920 19:00:32.587700  653977 cli_runner.go:164] Run: docker network inspect old-k8s-version-809747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:00:32.618112  653977 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0920 19:00:32.626780  653977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:00:32.641346  653977 kubeadm.go:883] updating cluster {Name:old-k8s-version-809747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-809747 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:00:32.641474  653977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 19:00:32.641541  653977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:00:32.712063  653977 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:00:32.712091  653977 containerd.go:534] Images already preloaded, skipping extraction
	I0920 19:00:32.712157  653977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:00:32.792461  653977 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:00:32.792491  653977 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:00:32.792500  653977 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0920 19:00:32.792654  653977 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-809747 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-809747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:00:32.792729  653977 ssh_runner.go:195] Run: sudo crictl info
	I0920 19:00:32.862290  653977 cni.go:84] Creating CNI manager for ""
	I0920 19:00:32.862337  653977 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:00:32.862355  653977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:00:32.862375  653977 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-809747 NodeName:old-k8s-version-809747 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 19:00:32.862520  653977 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-809747"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:00:32.862597  653977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 19:00:32.874825  653977 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:00:32.874896  653977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:00:32.894657  653977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0920 19:00:32.927900  653977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:00:32.959991  653977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
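	The kubeadm.yaml rendered above and copied to /var/tmp/minikube/kubeadm.yaml.new stacks four API documents in one file: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta2), a KubeletConfiguration, and a KubeProxyConfiguration. A minimal Go sketch of splitting such a multi-document file and listing each document's apiVersion and kind; the local filename and the gopkg.in/yaml.v3 dependency are assumptions made for illustration, not part of minikube:

	// Illustrative sketch only (not minikube code): split a multi-document
	// kubeadm manifest like the one rendered above and print each document's
	// apiVersion and kind.
	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the rendered config
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
		}
	}

	For the manifest above this would print the four kinds listed, confirming the split points fall on the "---" separators.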
	I0920 19:00:32.995934  653977 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:00:33.002869  653977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:00:33.020320  653977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:00:33.182353  653977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:00:33.200166  653977 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747 for IP: 192.168.76.2
	I0920 19:00:33.200190  653977 certs.go:194] generating shared ca certs ...
	I0920 19:00:33.200206  653977 certs.go:226] acquiring lock for ca certs: {Name:mk3d7fcf9ade00248d7372a8cec4403eeffc64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:00:33.200354  653977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key
	I0920 19:00:33.200404  653977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key
	I0920 19:00:33.200414  653977 certs.go:256] generating profile certs ...
	I0920 19:00:33.200497  653977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.key
	I0920 19:00:33.200574  653977 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/apiserver.key.641197a2
	I0920 19:00:33.200623  653977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/proxy-client.key
	I0920 19:00:33.200735  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783.pem (1338 bytes)
	W0920 19:00:33.200769  653977 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783_empty.pem, impossibly tiny 0 bytes
	I0920 19:00:33.200780  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:00:33.200807  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:00:33.200833  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:00:33.200859  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem (1675 bytes)
	I0920 19:00:33.200898  653977 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem (1708 bytes)
	I0920 19:00:33.201591  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:00:33.292630  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:00:33.371796  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:00:33.428964  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:00:33.460463  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:00:33.487940  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:00:33.515402  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:00:33.542432  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:00:33.569270  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783.pem --> /usr/share/ca-certificates/446783.pem (1338 bytes)
	I0920 19:00:33.595258  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem --> /usr/share/ca-certificates/4467832.pem (1708 bytes)
	I0920 19:00:33.629562  653977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:00:33.663984  653977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:00:33.684244  653977 ssh_runner.go:195] Run: openssl version
	I0920 19:00:33.690656  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/446783.pem && ln -fs /usr/share/ca-certificates/446783.pem /etc/ssl/certs/446783.pem"
	I0920 19:00:33.701816  653977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/446783.pem
	I0920 19:00:33.705834  653977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:20 /usr/share/ca-certificates/446783.pem
	I0920 19:00:33.705979  653977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/446783.pem
	I0920 19:00:33.713430  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/446783.pem /etc/ssl/certs/51391683.0"
	I0920 19:00:33.724387  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4467832.pem && ln -fs /usr/share/ca-certificates/4467832.pem /etc/ssl/certs/4467832.pem"
	I0920 19:00:33.734937  653977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4467832.pem
	I0920 19:00:33.739058  653977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:20 /usr/share/ca-certificates/4467832.pem
	I0920 19:00:33.739203  653977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4467832.pem
	I0920 19:00:33.746764  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4467832.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:00:33.756872  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:00:33.767514  653977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:00:33.771735  653977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:10 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:00:33.771875  653977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:00:33.779671  653977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:00:33.790276  653977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:00:33.794565  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:00:33.806774  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:00:33.831549  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:00:33.842801  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:00:33.851435  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:00:33.859535  653977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
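	The six openssl runs above all use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how minikube decides the existing control-plane certificates can be reused. A rough Go equivalent of the same check, with the certificate path taken from the first run above and error handling kept illustrative:

	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report failure if the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}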
	I0920 19:00:33.867065  653977 kubeadm.go:392] StartCluster: {Name:old-k8s-version-809747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-809747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:00:33.867228  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 19:00:33.867324  653977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:00:33.925047  653977 cri.go:89] found id: "4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:00:33.925077  653977 cri.go:89] found id: "6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:00:33.925082  653977 cri.go:89] found id: "19f7587837edd2296001abf813b8f85bc66215e3208e13e9ab0f2b81524e8a9f"
	I0920 19:00:33.925090  653977 cri.go:89] found id: "899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:00:33.925093  653977 cri.go:89] found id: "ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:00:33.925097  653977 cri.go:89] found id: "dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:00:33.925100  653977 cri.go:89] found id: "7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:00:33.925103  653977 cri.go:89] found id: "fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:00:33.925106  653977 cri.go:89] found id: ""
	I0920 19:00:33.925161  653977 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0920 19:00:33.938762  653977 cri.go:116] JSON = null
	W0920 19:00:33.938811  653977 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0920 19:00:33.938878  653977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:00:33.949230  653977 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:00:33.949247  653977 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:00:33.949300  653977 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:00:33.958983  653977 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:00:33.959423  653977 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-809747" does not appear in /home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 19:00:33.959521  653977 kubeconfig.go:62] /home/jenkins/minikube-integration/19679-440039/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-809747" cluster setting kubeconfig missing "old-k8s-version-809747" context setting]
	I0920 19:00:33.959780  653977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/kubeconfig: {Name:mkc0c275236e567d398d3ba786de8188e8f878bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:00:33.960997  653977 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:00:33.976853  653977 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0920 19:00:33.976938  653977 kubeadm.go:597] duration metric: took 27.684298ms to restartPrimaryControlPlane
	I0920 19:00:33.976962  653977 kubeadm.go:394] duration metric: took 109.906388ms to StartCluster
	I0920 19:00:33.977011  653977 settings.go:142] acquiring lock: {Name:mk1135c1a1ce95063626d6fac03fabf56993cb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:00:33.977116  653977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 19:00:33.977850  653977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/kubeconfig: {Name:mkc0c275236e567d398d3ba786de8188e8f878bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:00:33.978165  653977 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 19:00:33.978590  653977 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:00:33.978668  653977 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-809747"
	I0920 19:00:33.978684  653977 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-809747"
	W0920 19:00:33.978690  653977 addons.go:243] addon storage-provisioner should already be in state true
	I0920 19:00:33.978715  653977 host.go:66] Checking if "old-k8s-version-809747" exists ...
	I0920 19:00:33.979165  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:33.979989  653977 config.go:182] Loaded profile config "old-k8s-version-809747": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 19:00:33.980175  653977 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-809747"
	I0920 19:00:33.980226  653977 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-809747"
	W0920 19:00:33.980248  653977 addons.go:243] addon metrics-server should already be in state true
	I0920 19:00:33.980299  653977 host.go:66] Checking if "old-k8s-version-809747" exists ...
	I0920 19:00:33.980886  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:33.981152  653977 addons.go:69] Setting dashboard=true in profile "old-k8s-version-809747"
	I0920 19:00:33.981200  653977 addons.go:234] Setting addon dashboard=true in "old-k8s-version-809747"
	W0920 19:00:33.981240  653977 addons.go:243] addon dashboard should already be in state true
	I0920 19:00:33.981290  653977 host.go:66] Checking if "old-k8s-version-809747" exists ...
	I0920 19:00:33.981870  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:33.982345  653977 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-809747"
	I0920 19:00:33.982389  653977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-809747"
	I0920 19:00:33.982755  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:33.990505  653977 out.go:177] * Verifying Kubernetes components...
	I0920 19:00:33.996404  653977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:00:34.043509  653977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:00:34.045750  653977 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:00:34.045774  653977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:00:34.045844  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:34.066684  653977 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 19:00:34.069094  653977 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:00:34.069122  653977 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:00:34.069209  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:34.082618  653977 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-809747"
	W0920 19:00:34.082642  653977 addons.go:243] addon default-storageclass should already be in state true
	I0920 19:00:34.082669  653977 host.go:66] Checking if "old-k8s-version-809747" exists ...
	I0920 19:00:34.083099  653977 cli_runner.go:164] Run: docker container inspect old-k8s-version-809747 --format={{.State.Status}}
	I0920 19:00:34.106400  653977 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0920 19:00:34.112085  653977 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0920 19:00:34.118347  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0920 19:00:34.118385  653977 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0920 19:00:34.118464  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:34.150458  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:34.150997  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:34.158446  653977 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:34.158478  653977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:00:34.158547  653977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-809747
	I0920 19:00:34.206537  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:34.222589  653977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:00:34.230708  653977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/old-k8s-version-809747/id_rsa Username:docker}
	I0920 19:00:34.255645  653977 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-809747" to be "Ready" ...
	I0920 19:00:34.357792  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:00:34.386772  653977 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:00:34.386806  653977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 19:00:34.467819  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0920 19:00:34.467873  653977 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0920 19:00:34.485604  653977 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:00:34.485653  653977 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:00:34.509370  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:34.569023  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0920 19:00:34.569072  653977 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0920 19:00:34.596263  653977 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:00:34.596305  653977 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0920 19:00:34.678654  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:34.678712  653977 retry.go:31] will retry after 126.39832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
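	The "connection refused" failure above (and the many that follow) occur because the addon manifests are being applied while the restarted apiserver is still coming up on localhost:8443; minikube's retry helper simply waits an increasing delay (126ms, 252ms, 492ms, ... in this log) and re-applies. A minimal sketch of that retry-with-growing-delay pattern, not minikube's actual retry.go implementation; delays and the attempt cap are assumptions for illustration:

	// Minimal illustration of retrying `kubectl apply` with a growing delay
	// until the apiserver accepts connections or the attempts run out.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 125 * time.Millisecond // assumed starting delay, roughly matching the log
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "apply", "-f",
				"/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
			if err == nil {
				fmt.Println("applied successfully")
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
			time.Sleep(delay)
			delay *= 2 // back off before trying again
		}
		fmt.Println("giving up after repeated failures")
	}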
	I0920 19:00:34.683921  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:00:34.721285  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0920 19:00:34.721318  653977 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0920 19:00:34.806296  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:00:34.844861  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0920 19:00:34.844891  653977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0920 19:00:34.867934  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:34.867971  653977 retry.go:31] will retry after 252.208741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:34.967419  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0920 19:00:34.967459  653977 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0920 19:00:35.046446  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.046487  653977 retry.go:31] will retry after 492.197471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:35.046535  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.046550  653977 retry.go:31] will retry after 148.04313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.054206  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0920 19:00:35.054240  653977 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0920 19:00:35.085036  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0920 19:00:35.085116  653977 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0920 19:00:35.119094  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0920 19:00:35.119172  653977 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0920 19:00:35.120397  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:35.192259  653977 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 19:00:35.192336  653977 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0920 19:00:35.195659  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:00:35.252239  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 19:00:35.313446  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.313542  653977 retry.go:31] will retry after 229.08492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:35.408922  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.409014  653977 retry.go:31] will retry after 425.599307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:35.435759  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.435848  653977 retry.go:31] will retry after 177.893908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.539107  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:00:35.543022  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:35.614587  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 19:00:35.686829  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.686886  653977 retry.go:31] will retry after 632.088543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:35.749053  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.749120  653977 retry.go:31] will retry after 289.010206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:35.791967  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.792016  653977 retry.go:31] will retry after 241.49281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.835163  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 19:00:35.926688  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:35.926721  653977 retry.go:31] will retry after 586.254513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.034061  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 19:00:36.038533  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 19:00:36.204391  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.204423  653977 retry.go:31] will retry after 708.369758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:36.218583  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.218656  653977 retry.go:31] will retry after 681.59252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.257223  653977 node_ready.go:53] error getting node "old-k8s-version-809747": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-809747": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 19:00:36.319389  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 19:00:36.411828  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.411912  653977 retry.go:31] will retry after 500.160533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.514059  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 19:00:36.616795  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.616879  653977 retry.go:31] will retry after 991.378116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:36.900754  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:36.913168  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 19:00:36.913356  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 19:00:37.091066  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.091165  653977 retry.go:31] will retry after 714.861817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:37.133174  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.133215  653977 retry.go:31] will retry after 1.136574144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:37.133286  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.133302  653977 retry.go:31] will retry after 520.089132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.609338  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:00:37.653755  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 19:00:37.750766  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.750828  653977 retry.go:31] will retry after 1.571580874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 19:00:37.802152  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.802198  653977 retry.go:31] will retry after 1.845038058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.806295  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 19:00:37.913625  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:37.913671  653977 retry.go:31] will retry after 2.139714448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:38.270434  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 19:00:38.392679  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:38.392714  653977 retry.go:31] will retry after 1.156435371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:38.756567  653977 node_ready.go:53] error getting node "old-k8s-version-809747": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-809747": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 19:00:39.323521  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 19:00:39.415463  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:39.415501  653977 retry.go:31] will retry after 2.58842347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:39.549817  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 19:00:39.629281  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:39.629323  653977 retry.go:31] will retry after 2.93109741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:39.647422  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 19:00:39.722484  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:39.722519  653977 retry.go:31] will retry after 968.998158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:40.054150  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 19:00:40.140044  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:40.140095  653977 retry.go:31] will retry after 3.740833405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:40.692417  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 19:00:40.757143  653977 node_ready.go:53] error getting node "old-k8s-version-809747": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-809747": dial tcp 192.168.76.2:8443: connect: connection refused
	W0920 19:00:40.767413  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:40.767450  653977 retry.go:31] will retry after 2.752304306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:42.004149  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 19:00:42.126808  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:42.126857  653977 retry.go:31] will retry after 2.333345578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:42.561621  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 19:00:42.803510  653977 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:42.803538  653977 retry.go:31] will retry after 3.797815098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 19:00:43.256164  653977 node_ready.go:53] error getting node "old-k8s-version-809747": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-809747": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 19:00:43.520687  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 19:00:43.881056  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:00:44.461334  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:00:46.601835  653977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:00:52.536437  653977 node_ready.go:49] node "old-k8s-version-809747" has status "Ready":"True"
	I0920 19:00:52.536460  653977 node_ready.go:38] duration metric: took 18.280780879s for node "old-k8s-version-809747" to be "Ready" ...
	I0920 19:00:52.536470  653977 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:00:52.728957  653977 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-682lc" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.839880  653977 pod_ready.go:93] pod "coredns-74ff55c5b-682lc" in "kube-system" namespace has status "Ready":"True"
	I0920 19:00:52.839949  653977 pod_ready.go:82] duration metric: took 110.911205ms for pod "coredns-74ff55c5b-682lc" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.839980  653977 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.869197  653977 pod_ready.go:93] pod "etcd-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"True"
	I0920 19:00:52.869277  653977 pod_ready.go:82] duration metric: took 29.275216ms for pod "etcd-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.869308  653977 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.892193  653977 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"True"
	I0920 19:00:52.892270  653977 pod_ready.go:82] duration metric: took 22.926823ms for pod "kube-apiserver-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.892297  653977 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.923907  653977 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"True"
	I0920 19:00:52.923982  653977 pod_ready.go:82] duration metric: took 31.662266ms for pod "kube-controller-manager-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.924009  653977 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tczmb" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.931551  653977 pod_ready.go:93] pod "kube-proxy-tczmb" in "kube-system" namespace has status "Ready":"True"
	I0920 19:00:52.931626  653977 pod_ready.go:82] duration metric: took 7.594275ms for pod "kube-proxy-tczmb" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:52.931654  653977 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:00:53.662541  653977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.141795624s)
	I0920 19:00:53.662833  653977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.781743549s)
	I0920 19:00:53.663013  653977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.201643979s)
	I0920 19:00:53.663048  653977 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-809747"
	I0920 19:00:53.663136  653977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.061280112s)
	I0920 19:00:53.665669  653977 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-809747 addons enable metrics-server
	
	I0920 19:00:53.676742  653977 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0920 19:00:53.678513  653977 addons.go:510] duration metric: took 19.699917763s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
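The long run of "apply failed, will retry" entries above is minikube re-applying each addon manifest on a randomized, roughly increasing delay until the restarted apiserver starts accepting connections on localhost:8443; once it does (around 19:00:53), all four applies complete on the same pass. A minimal Go sketch of that retry pattern, assuming kubectl is on PATH and using a hypothetical applyWithRetry helper and attempt budget rather than minikube's actual retry.go code:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f manifest` with a jittered,
    // doubling delay until it succeeds or the attempt budget is exhausted,
    // echoing the "will retry after ..." lines in the log above.
    func applyWithRetry(manifest string, attempts int) error {
        delay := time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
            sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, like the varying intervals above
            fmt.Printf("apply failed, will retry after %s\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return lastErr
    }

    func main() {
        // storage-provisioner.yaml is one of the manifests retried in the log above.
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }

The backoff schedule here is illustrative; the property shared with the log is that a refused connection is treated as transient and retried rather than failing addon enablement outright.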
	I0920 19:00:54.939896  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:00:57.447927  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:00:59.939374  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:01.939611  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:04.437515  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:06.438421  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:08.937500  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:10.949912  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:13.439383  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:15.451209  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:17.470631  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:19.939624  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:22.438287  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:24.438499  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:26.438651  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:28.439050  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:30.937439  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:32.937693  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:34.938249  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:37.438833  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:39.938527  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:41.939457  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:44.438113  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:46.440272  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:48.442691  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:50.941724  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:53.440294  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:55.938941  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:01:58.437924  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:00.487844  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:02.937913  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:04.938688  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:06.945809  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:09.458130  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:11.937956  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:14.442423  653977 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:14.937945  653977 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace has status "Ready":"True"
	I0920 19:02:14.938015  653977 pod_ready.go:82] duration metric: took 1m22.006338563s for pod "kube-scheduler-old-k8s-version-809747" in "kube-system" namespace to be "Ready" ...
	I0920 19:02:14.938035  653977 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace to be "Ready" ...
	I0920 19:02:16.945061  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:19.444950  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:21.944950  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:23.945119  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:26.444388  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:28.446037  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:30.447386  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:32.944913  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:35.444452  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:37.445194  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:39.944849  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:41.951696  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:44.444379  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:46.944741  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:49.444620  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:51.444828  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:53.445196  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:55.945024  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:02:57.945504  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:00.445583  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:02.944534  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:04.944836  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:06.945276  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:09.444417  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:11.444652  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:13.945525  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:16.444243  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:18.944428  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:20.944873  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:22.946834  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:25.445237  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:27.945266  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:29.945583  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:32.443920  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:34.444977  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:36.944307  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:39.445108  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:41.445485  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:43.448412  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:45.945487  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:48.444027  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:50.444433  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:52.444954  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:54.445149  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:56.944770  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:03:58.955345  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:01.445920  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:03.944493  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:05.944570  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:07.945024  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:09.945391  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:12.444159  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:14.951303  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:17.444923  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:19.445237  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:21.944054  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:24.444213  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:26.945720  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:29.443834  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:31.443892  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:33.444779  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:35.945420  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:38.445334  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:40.944760  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:43.444753  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:45.444792  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:47.945457  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:50.444771  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:52.944773  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:55.444132  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:57.944283  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:04:59.945310  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:01.945617  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:04.444334  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:06.444682  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:08.445379  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:10.945058  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:13.446169  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:15.944376  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:17.944493  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:19.945041  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:22.444104  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:24.445307  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:26.945125  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:28.946115  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:31.444732  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:33.445103  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:35.445219  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:37.945636  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:39.950096  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:41.950671  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:44.503978  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:46.945602  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:49.444647  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:51.950151  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:54.445777  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:56.945405  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:05:59.445012  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:01.445966  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:03.944592  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:05.944734  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:07.945215  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:10.444198  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:12.445163  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.445421  653977 pod_ready.go:103] pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:06:14.944609  653977 pod_ready.go:82] duration metric: took 4m0.00655808s for pod "metrics-server-9975d5f86-bx26v" in "kube-system" namespace to be "Ready" ...
	E0920 19:06:14.944634  653977 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 19:06:14.944683  653977 pod_ready.go:39] duration metric: took 5m22.408163156s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
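The 4m0s wait above ends with "context deadline exceeded" because metrics-server-9975d5f86-bx26v never reports a Ready condition; the kubelet entries gathered further down show it stuck in ErrImagePull/ImagePullBackOff on the deliberately unresolvable fake.domain/registry.k8s.io/echoserver:1.4 image. A minimal sketch of the same kind of bounded readiness wait, assuming kubectl is on PATH and using a hypothetical waitPodReady helper and poll interval rather than minikube's own pod_ready code:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPodReady polls the pod's Ready condition until it reports "True"
    // or the context deadline expires.
    func waitPodReady(ctx context.Context, namespace, name string) error {
        jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        for {
            out, err := exec.CommandContext(ctx, "kubectl", "-n", namespace,
                "get", "pod", name, "-o", jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s never became Ready: %w", namespace, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitPodReady(ctx, "kube-system", "metrics-server-9975d5f86-bx26v"); err != nil {
            fmt.Println(err) // ends in "context deadline exceeded", matching the log
        }
    }

A pod stuck in ImagePullBackOff can sit like this indefinitely, so a wait of this shape only resolves once the image reference is fixed or the deadline fires.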
	I0920 19:06:14.944705  653977 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:06:14.944736  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:14.944807  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:14.984507  653977 cri.go:89] found id: "d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:14.984579  653977 cri.go:89] found id: "fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:14.984591  653977 cri.go:89] found id: ""
	I0920 19:06:14.984601  653977 logs.go:276] 2 containers: [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4]
	I0920 19:06:14.984660  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:14.988230  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:14.991662  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 19:06:14.991757  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:15.056844  653977 cri.go:89] found id: "085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:15.056930  653977 cri.go:89] found id: "ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:15.056964  653977 cri.go:89] found id: ""
	I0920 19:06:15.057009  653977 logs.go:276] 2 containers: [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5]
	I0920 19:06:15.057102  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.061726  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.066115  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 19:06:15.066220  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:15.133035  653977 cri.go:89] found id: "812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:15.133072  653977 cri.go:89] found id: "4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:15.133079  653977 cri.go:89] found id: ""
	I0920 19:06:15.133088  653977 logs.go:276] 2 containers: [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada]
	I0920 19:06:15.133156  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.139228  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.144195  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:15.144326  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:15.201858  653977 cri.go:89] found id: "2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:15.201884  653977 cri.go:89] found id: "7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:15.201892  653977 cri.go:89] found id: ""
	I0920 19:06:15.201901  653977 logs.go:276] 2 containers: [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043]
	I0920 19:06:15.201992  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.206566  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.210255  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:15.210403  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:15.255919  653977 cri.go:89] found id: "e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:15.255996  653977 cri.go:89] found id: "899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:15.256018  653977 cri.go:89] found id: ""
	I0920 19:06:15.256050  653977 logs.go:276] 2 containers: [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d]
	I0920 19:06:15.256139  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.259706  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.263193  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:15.263319  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:15.306689  653977 cri.go:89] found id: "fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:15.306759  653977 cri.go:89] found id: "dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:15.306782  653977 cri.go:89] found id: ""
	I0920 19:06:15.306798  653977 logs.go:276] 2 containers: [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32]
	I0920 19:06:15.306856  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.310982  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.314414  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:15.314485  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:15.352773  653977 cri.go:89] found id: "ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:15.352796  653977 cri.go:89] found id: "6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:15.352801  653977 cri.go:89] found id: ""
	I0920 19:06:15.352808  653977 logs.go:276] 2 containers: [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c]
	I0920 19:06:15.352867  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.356635  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.360379  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:15.360473  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:15.402427  653977 cri.go:89] found id: "a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:15.402457  653977 cri.go:89] found id: ""
	I0920 19:06:15.402465  653977 logs.go:276] 1 containers: [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd]
	I0920 19:06:15.402548  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.406374  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:06:15.406479  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:06:15.467757  653977 cri.go:89] found id: "78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:15.467777  653977 cri.go:89] found id: "558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
	I0920 19:06:15.467782  653977 cri.go:89] found id: ""
	I0920 19:06:15.467788  653977 logs.go:276] 2 containers: [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f]
	I0920 19:06:15.467844  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:15.473338  653977 ssh_runner.go:195] Run: which crictl
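Before collecting logs, the cri.go lines above enumerate container IDs per control-plane component by running crictl with a name filter; the two IDs found for most components are the containers from before and after the restart. A rough sketch of that lookup, assuming crictl is available on the node and using a hypothetical listContainerIDs helper invoked locally rather than through minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (running or exited)
    // whose name matches the filter, the same query issued above for
    // kube-apiserver, etcd, coredns, kube-scheduler and the rest.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            ids, err := listContainerIDs(component)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s: %v\n", component, ids)
        }
    }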
	I0920 19:06:15.476870  653977 logs.go:123] Gathering logs for kube-proxy [899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d] ...
	I0920 19:06:15.476944  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:15.522385  653977 logs.go:123] Gathering logs for kindnet [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de] ...
	I0920 19:06:15.522416  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:15.579079  653977 logs.go:123] Gathering logs for kindnet [6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c] ...
	I0920 19:06:15.579113  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:15.628786  653977 logs.go:123] Gathering logs for etcd [ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5] ...
	I0920 19:06:15.628819  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:15.687528  653977 logs.go:123] Gathering logs for kube-proxy [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec] ...
	I0920 19:06:15.687557  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:15.736648  653977 logs.go:123] Gathering logs for coredns [4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada] ...
	I0920 19:06:15.736676  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:15.783007  653977 logs.go:123] Gathering logs for kube-scheduler [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677] ...
	I0920 19:06:15.783038  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:15.829656  653977 logs.go:123] Gathering logs for kube-scheduler [7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043] ...
	I0920 19:06:15.829684  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:15.878134  653977 logs.go:123] Gathering logs for kube-controller-manager [dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32] ...
	I0920 19:06:15.878172  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:15.946804  653977 logs.go:123] Gathering logs for storage-provisioner [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6] ...
	I0920 19:06:15.946844  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:15.988529  653977 logs.go:123] Gathering logs for storage-provisioner [558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f] ...
	I0920 19:06:15.988560  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
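Each "Gathering logs" step above fetches the last 400 lines from one of the container IDs found earlier via crictl logs --tail. A small sketch of that collection step, again with a hypothetical helper name and run locally on the node for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs returns the last n lines of a container's logs,
    // mirroring the `crictl logs --tail 400 <id>` calls in the log above.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Container ID taken from the storage-provisioner entry above.
        logs, err := tailContainerLogs("78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6", 400)
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Print(logs)
    }

The kubelet itself is not a container here, so its logs are pulled from journald (journalctl -u kubelet -n 400) in the step that follows.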
	I0920 19:06:16.034644  653977 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:16.034680  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:06:16.100049  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302392     664 reflector.go:138] object-"kube-system"/"kindnet-token-ll598": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ll598" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.100305  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302467     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5rgk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5rgk7" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.100627  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302516     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-2m84d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2m84d" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.100849  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302566     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.101064  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302642     664 reflector.go:138] object-"kube-system"/"coredns-token-tqf8k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tqf8k" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.101268  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302685     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.101492  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302728     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fk86c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fk86c" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.101705  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302769     664 reflector.go:138] object-"default"/"default-token-gqwtk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqwtk" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:16.110802  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:56 old-k8s-version-809747 kubelet[664]: E0920 19:00:56.277740     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:16.111004  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:57 old-k8s-version-809747 kubelet[664]: E0920 19:00:57.075859     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.115127  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:08 old-k8s-version-809747 kubelet[664]: E0920 19:01:08.853318     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:16.117924  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:19 old-k8s-version-809747 kubelet[664]: E0920 19:01:19.861009     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.121173  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:22 old-k8s-version-809747 kubelet[664]: E0920 19:01:22.209550     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.121986  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:23 old-k8s-version-809747 kubelet[664]: E0920 19:01:23.208449     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.122473  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:24 old-k8s-version-809747 kubelet[664]: E0920 19:01:24.374198     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.123188  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:25 old-k8s-version-809747 kubelet[664]: E0920 19:01:25.231977     664 pod_workers.go:191] Error syncing pod fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae ("storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"
	W0920 19:06:16.126691  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:30 old-k8s-version-809747 kubelet[664]: E0920 19:01:30.851954     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:16.128767  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:38 old-k8s-version-809747 kubelet[664]: E0920 19:01:38.279749     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.129446  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:42 old-k8s-version-809747 kubelet[664]: E0920 19:01:42.842501     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.129840  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:44 old-k8s-version-809747 kubelet[664]: E0920 19:01:44.374858     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.134573  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:55 old-k8s-version-809747 kubelet[664]: E0920 19:01:55.842879     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.134786  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:56 old-k8s-version-809747 kubelet[664]: E0920 19:01:56.842908     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.135111  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:08 old-k8s-version-809747 kubelet[664]: E0920 19:02:08.844439     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.135580  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:10 old-k8s-version-809747 kubelet[664]: E0920 19:02:10.389468     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.135919  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:14 old-k8s-version-809747 kubelet[664]: E0920 19:02:14.374714     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.138430  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:20 old-k8s-version-809747 kubelet[664]: E0920 19:02:20.851952     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:16.138769  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:26 old-k8s-version-809747 kubelet[664]: E0920 19:02:26.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.138962  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:31 old-k8s-version-809747 kubelet[664]: E0920 19:02:31.842985     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.139296  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:41 old-k8s-version-809747 kubelet[664]: E0920 19:02:41.843033     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.139484  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:44 old-k8s-version-809747 kubelet[664]: E0920 19:02:44.846601     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.140079  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:54 old-k8s-version-809747 kubelet[664]: E0920 19:02:54.515666     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.140266  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:59 old-k8s-version-809747 kubelet[664]: E0920 19:02:59.842434     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.140598  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:04 old-k8s-version-809747 kubelet[664]: E0920 19:03:04.374783     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.140786  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:11 old-k8s-version-809747 kubelet[664]: E0920 19:03:11.842669     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.141116  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:16 old-k8s-version-809747 kubelet[664]: E0920 19:03:16.841791     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.141304  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:22 old-k8s-version-809747 kubelet[664]: E0920 19:03:22.842284     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.141636  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:28 old-k8s-version-809747 kubelet[664]: E0920 19:03:28.842043     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.141822  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:33 old-k8s-version-809747 kubelet[664]: E0920 19:03:33.843004     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.142154  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:42 old-k8s-version-809747 kubelet[664]: E0920 19:03:42.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.144722  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:48 old-k8s-version-809747 kubelet[664]: E0920 19:03:48.851175     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:16.145056  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:54 old-k8s-version-809747 kubelet[664]: E0920 19:03:54.841890     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.145247  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:01 old-k8s-version-809747 kubelet[664]: E0920 19:04:01.843066     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.145579  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:08 old-k8s-version-809747 kubelet[664]: E0920 19:04:08.841968     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.145765  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:15 old-k8s-version-809747 kubelet[664]: E0920 19:04:15.842571     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.146365  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:21 old-k8s-version-809747 kubelet[664]: E0920 19:04:21.744371     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.146700  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:24 old-k8s-version-809747 kubelet[664]: E0920 19:04:24.377802     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.146902  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:29 old-k8s-version-809747 kubelet[664]: E0920 19:04:29.842421     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.147233  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:36 old-k8s-version-809747 kubelet[664]: E0920 19:04:36.841921     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.147420  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:44 old-k8s-version-809747 kubelet[664]: E0920 19:04:44.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.147756  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:51 old-k8s-version-809747 kubelet[664]: E0920 19:04:51.843005     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.147942  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:59 old-k8s-version-809747 kubelet[664]: E0920 19:04:59.842361     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.148272  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:02 old-k8s-version-809747 kubelet[664]: E0920 19:05:02.842297     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.148603  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:13 old-k8s-version-809747 kubelet[664]: E0920 19:05:13.842057     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.148791  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:14 old-k8s-version-809747 kubelet[664]: E0920 19:05:14.842283     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.149129  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:24 old-k8s-version-809747 kubelet[664]: E0920 19:05:24.842121     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.149319  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:28 old-k8s-version-809747 kubelet[664]: E0920 19:05:28.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.149649  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:37 old-k8s-version-809747 kubelet[664]: E0920 19:05:37.842390     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.149834  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.150164  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.150358  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.150691  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.150880  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:16.150890  653977 logs.go:123] Gathering logs for kube-apiserver [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7] ...
	I0920 19:06:16.150905  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:16.225582  653977 logs.go:123] Gathering logs for containerd ...
	I0920 19:06:16.225634  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 19:06:16.292587  653977 logs.go:123] Gathering logs for coredns [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327] ...
	I0920 19:06:16.292628  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:16.337773  653977 logs.go:123] Gathering logs for kube-controller-manager [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548] ...
	I0920 19:06:16.337804  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:16.402127  653977 logs.go:123] Gathering logs for kubernetes-dashboard [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd] ...
	I0920 19:06:16.402169  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:16.452253  653977 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:16.452283  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:16.471492  653977 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:16.471570  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:16.635700  653977 logs.go:123] Gathering logs for container status ...
	I0920 19:06:16.635739  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:16.681697  653977 logs.go:123] Gathering logs for kube-apiserver [fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4] ...
	I0920 19:06:16.681730  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:16.779845  653977 logs.go:123] Gathering logs for etcd [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70] ...
	I0920 19:06:16.779931  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:16.833254  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:16.833407  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:06:16.833496  653977 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 19:06:16.833542  653977 out.go:270]   Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.833604  653977 out.go:270]   Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	  Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.833657  653977 out.go:270]   Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:16.833695  653977 out.go:270]   Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	  Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:16.833727  653977 out.go:270]   Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:16.833758  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:16.833779  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
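	The gathering cycle above is a series of crictl invocations issued over SSH; a minimal sketch of collecting the same evidence by hand, run inside the node (for example via `minikube ssh -p old-k8s-version-809747`), would be roughly the loop below. The loop and the choice of the kube-apiserver container are illustrative; only the crictl commands themselves are taken from the log.
	# Tail the logs of every kube-apiserver container (running or exited),
	# mirroring the "crictl ps -a --quiet --name=..." and "crictl logs --tail 400 <id>"
	# commands shown in the ssh_runner lines above.
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	  sudo /usr/bin/crictl logs --tail 400 "$id"
	done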
	I0920 19:06:26.835053  653977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.849195  653977 api_server.go:72] duration metric: took 5m52.870971077s to wait for apiserver process to appear ...
	I0920 19:06:26.849217  653977 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:26.849253  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:26.849308  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:26.897788  653977 cri.go:89] found id: "d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:26.897808  653977 cri.go:89] found id: "fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:26.897813  653977 cri.go:89] found id: ""
	I0920 19:06:26.897821  653977 logs.go:276] 2 containers: [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4]
	I0920 19:06:26.897878  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.902338  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.906548  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 19:06:26.906623  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:26.958378  653977 cri.go:89] found id: "085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:26.958398  653977 cri.go:89] found id: "ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:26.958403  653977 cri.go:89] found id: ""
	I0920 19:06:26.958411  653977 logs.go:276] 2 containers: [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5]
	I0920 19:06:26.958470  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.962667  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.966536  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 19:06:26.966653  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:27.027915  653977 cri.go:89] found id: "812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:27.027936  653977 cri.go:89] found id: "4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:27.027941  653977 cri.go:89] found id: ""
	I0920 19:06:27.027948  653977 logs.go:276] 2 containers: [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada]
	I0920 19:06:27.028013  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.033278  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.037758  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:27.037875  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:27.093653  653977 cri.go:89] found id: "2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:27.093726  653977 cri.go:89] found id: "7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:27.093746  653977 cri.go:89] found id: ""
	I0920 19:06:27.093775  653977 logs.go:276] 2 containers: [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043]
	I0920 19:06:27.093857  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.098536  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.102887  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:27.103005  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:27.176029  653977 cri.go:89] found id: "e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:27.176091  653977 cri.go:89] found id: "899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:27.176123  653977 cri.go:89] found id: ""
	I0920 19:06:27.176152  653977 logs.go:276] 2 containers: [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d]
	I0920 19:06:27.176226  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.181026  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.187926  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:27.188048  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:27.235096  653977 cri.go:89] found id: "fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:27.235167  653977 cri.go:89] found id: "dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:27.235187  653977 cri.go:89] found id: ""
	I0920 19:06:27.235212  653977 logs.go:276] 2 containers: [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32]
	I0920 19:06:27.235293  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.239445  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.243444  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:27.243570  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:27.296352  653977 cri.go:89] found id: "ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:27.296445  653977 cri.go:89] found id: "6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:27.296469  653977 cri.go:89] found id: ""
	I0920 19:06:27.296495  653977 logs.go:276] 2 containers: [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c]
	I0920 19:06:27.296576  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.300837  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.304814  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:27.304929  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:27.366905  653977 cri.go:89] found id: "a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:27.366980  653977 cri.go:89] found id: ""
	I0920 19:06:27.367003  653977 logs.go:276] 1 containers: [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd]
	I0920 19:06:27.367084  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.371726  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:06:27.371855  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:06:27.419397  653977 cri.go:89] found id: "78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:27.419472  653977 cri.go:89] found id: "558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
	I0920 19:06:27.419518  653977 cri.go:89] found id: ""
	I0920 19:06:27.419545  653977 logs.go:276] 2 containers: [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f]
	I0920 19:06:27.419629  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.423969  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.430101  653977 logs.go:123] Gathering logs for kube-proxy [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec] ...
	I0920 19:06:27.430174  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:27.488295  653977 logs.go:123] Gathering logs for kube-controller-manager [dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32] ...
	I0920 19:06:27.488370  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:27.567691  653977 logs.go:123] Gathering logs for kube-apiserver [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7] ...
	I0920 19:06:27.567769  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:27.637441  653977 logs.go:123] Gathering logs for coredns [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327] ...
	I0920 19:06:27.637485  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:27.712024  653977 logs.go:123] Gathering logs for coredns [4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada] ...
	I0920 19:06:27.712059  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:27.761492  653977 logs.go:123] Gathering logs for kube-scheduler [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677] ...
	I0920 19:06:27.761548  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:27.815260  653977 logs.go:123] Gathering logs for kube-proxy [899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d] ...
	I0920 19:06:27.815292  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:27.868976  653977 logs.go:123] Gathering logs for kube-controller-manager [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548] ...
	I0920 19:06:27.869008  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:27.955237  653977 logs.go:123] Gathering logs for kindnet [6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c] ...
	I0920 19:06:27.955275  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:28.009037  653977 logs.go:123] Gathering logs for kubernetes-dashboard [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd] ...
	I0920 19:06:28.009074  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:28.059987  653977 logs.go:123] Gathering logs for kube-apiserver [fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4] ...
	I0920 19:06:28.060019  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:28.165966  653977 logs.go:123] Gathering logs for etcd [ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5] ...
	I0920 19:06:28.166007  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:28.221462  653977 logs.go:123] Gathering logs for container status ...
	I0920 19:06:28.221495  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:28.317786  653977 logs.go:123] Gathering logs for storage-provisioner [558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f] ...
	I0920 19:06:28.317819  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
	I0920 19:06:28.363443  653977 logs.go:123] Gathering logs for containerd ...
	I0920 19:06:28.363472  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 19:06:28.429304  653977 logs.go:123] Gathering logs for kube-scheduler [7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043] ...
	I0920 19:06:28.429342  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:28.511550  653977 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:28.511585  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:28.530657  653977 logs.go:123] Gathering logs for etcd [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70] ...
	I0920 19:06:28.530687  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:28.612728  653977 logs.go:123] Gathering logs for kindnet [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de] ...
	I0920 19:06:28.612765  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:28.688022  653977 logs.go:123] Gathering logs for storage-provisioner [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6] ...
	I0920 19:06:28.688058  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:28.738857  653977 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:28.738886  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:06:28.797990  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302392     664 reflector.go:138] object-"kube-system"/"kindnet-token-ll598": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ll598" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798246  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302467     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5rgk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5rgk7" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798535  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302516     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-2m84d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2m84d" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798750  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302566     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798966  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302642     664 reflector.go:138] object-"kube-system"/"coredns-token-tqf8k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tqf8k" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799170  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302685     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799394  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302728     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fk86c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fk86c" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799604  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302769     664 reflector.go:138] object-"default"/"default-token-gqwtk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqwtk" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.808631  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:56 old-k8s-version-809747 kubelet[664]: E0920 19:00:56.277740     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.808831  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:57 old-k8s-version-809747 kubelet[664]: E0920 19:00:57.075859     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.811689  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:08 old-k8s-version-809747 kubelet[664]: E0920 19:01:08.853318     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.813374  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:19 old-k8s-version-809747 kubelet[664]: E0920 19:01:19.861009     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.814320  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:22 old-k8s-version-809747 kubelet[664]: E0920 19:01:22.209550     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.814660  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:23 old-k8s-version-809747 kubelet[664]: E0920 19:01:23.208449     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.814996  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:24 old-k8s-version-809747 kubelet[664]: E0920 19:01:24.374198     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.815437  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:25 old-k8s-version-809747 kubelet[664]: E0920 19:01:25.231977     664 pod_workers.go:191] Error syncing pod fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae ("storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"
	W0920 19:06:28.817920  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:30 old-k8s-version-809747 kubelet[664]: E0920 19:01:30.851954     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.819220  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:38 old-k8s-version-809747 kubelet[664]: E0920 19:01:38.279749     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.819409  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:42 old-k8s-version-809747 kubelet[664]: E0920 19:01:42.842501     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.819748  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:44 old-k8s-version-809747 kubelet[664]: E0920 19:01:44.374858     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.820076  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:55 old-k8s-version-809747 kubelet[664]: E0920 19:01:55.842879     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.820260  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:56 old-k8s-version-809747 kubelet[664]: E0920 19:01:56.842908     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.820577  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:08 old-k8s-version-809747 kubelet[664]: E0920 19:02:08.844439     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.821043  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:10 old-k8s-version-809747 kubelet[664]: E0920 19:02:10.389468     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.821373  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:14 old-k8s-version-809747 kubelet[664]: E0920 19:02:14.374714     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.823922  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:20 old-k8s-version-809747 kubelet[664]: E0920 19:02:20.851952     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.824256  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:26 old-k8s-version-809747 kubelet[664]: E0920 19:02:26.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.824495  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:31 old-k8s-version-809747 kubelet[664]: E0920 19:02:31.842985     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.824826  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:41 old-k8s-version-809747 kubelet[664]: E0920 19:02:41.843033     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.825012  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:44 old-k8s-version-809747 kubelet[664]: E0920 19:02:44.846601     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.825614  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:54 old-k8s-version-809747 kubelet[664]: E0920 19:02:54.515666     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.825802  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:59 old-k8s-version-809747 kubelet[664]: E0920 19:02:59.842434     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.826131  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:04 old-k8s-version-809747 kubelet[664]: E0920 19:03:04.374783     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.826352  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:11 old-k8s-version-809747 kubelet[664]: E0920 19:03:11.842669     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.826681  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:16 old-k8s-version-809747 kubelet[664]: E0920 19:03:16.841791     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.826866  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:22 old-k8s-version-809747 kubelet[664]: E0920 19:03:22.842284     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.827271  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:28 old-k8s-version-809747 kubelet[664]: E0920 19:03:28.842043     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.827472  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:33 old-k8s-version-809747 kubelet[664]: E0920 19:03:33.843004     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.827805  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:42 old-k8s-version-809747 kubelet[664]: E0920 19:03:42.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.830249  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:48 old-k8s-version-809747 kubelet[664]: E0920 19:03:48.851175     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.830593  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:54 old-k8s-version-809747 kubelet[664]: E0920 19:03:54.841890     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.830778  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:01 old-k8s-version-809747 kubelet[664]: E0920 19:04:01.843066     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.831103  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:08 old-k8s-version-809747 kubelet[664]: E0920 19:04:08.841968     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.831288  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:15 old-k8s-version-809747 kubelet[664]: E0920 19:04:15.842571     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.831935  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:21 old-k8s-version-809747 kubelet[664]: E0920 19:04:21.744371     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832267  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:24 old-k8s-version-809747 kubelet[664]: E0920 19:04:24.377802     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832453  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:29 old-k8s-version-809747 kubelet[664]: E0920 19:04:29.842421     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.832781  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:36 old-k8s-version-809747 kubelet[664]: E0920 19:04:36.841921     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832965  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:44 old-k8s-version-809747 kubelet[664]: E0920 19:04:44.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.833331  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:51 old-k8s-version-809747 kubelet[664]: E0920 19:04:51.843005     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.833519  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:59 old-k8s-version-809747 kubelet[664]: E0920 19:04:59.842361     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.833846  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:02 old-k8s-version-809747 kubelet[664]: E0920 19:05:02.842297     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834176  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:13 old-k8s-version-809747 kubelet[664]: E0920 19:05:13.842057     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834367  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:14 old-k8s-version-809747 kubelet[664]: E0920 19:05:14.842283     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.834699  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:24 old-k8s-version-809747 kubelet[664]: E0920 19:05:24.842121     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834882  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:28 old-k8s-version-809747 kubelet[664]: E0920 19:05:28.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.835207  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:37 old-k8s-version-809747 kubelet[664]: E0920 19:05:37.842390     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.835394  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.835726  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.835911  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.836240  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.836424  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.836754  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.836938  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:28.836948  653977 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:28.836964  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:29.018737  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:29.018770  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:06:29.018819  653977 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 19:06:29.018834  653977 out.go:270]   Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:29.018842  653977 out.go:270]   Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	  Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:29.018857  653977 out.go:270]   Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:29.018863  653977 out.go:270]   Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	  Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:29.018869  653977 out.go:270]   Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:29.018875  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:29.018889  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:06:39.019670  653977 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0920 19:06:39.029993  653977 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0920 19:06:39.032174  653977 out.go:201] 
	W0920 19:06:39.034164  653977 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 19:06:39.034205  653977 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 19:06:39.034237  653977 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 19:06:39.034250  653977 out.go:270] * 
	* 
	W0920 19:06:39.035133  653977 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:06:39.038378  653977 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-809747 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-809747
helpers_test.go:235: (dbg) docker inspect old-k8s-version-809747:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693",
	        "Created": "2024-09-20T18:57:26.973312319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 654182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:00:25.566123858Z",
	            "FinishedAt": "2024-09-20T19:00:24.232284705Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693/1cd1d88cd1bb5c3b9f55eece743bd024009a64cbd1190993de0e8ccae96ae693-json.log",
	        "Name": "/old-k8s-version-809747",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-809747:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-809747",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/961870f7bf9df920505510a2445372c668a086820841b60d6db77a5ff75531c4-init/diff:/var/lib/docker/overlay2/3aa0f15c41477a99e99dc1a77b5fdd60c51e1433d51cff06d0a41fe51ac2c7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/961870f7bf9df920505510a2445372c668a086820841b60d6db77a5ff75531c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/961870f7bf9df920505510a2445372c668a086820841b60d6db77a5ff75531c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/961870f7bf9df920505510a2445372c668a086820841b60d6db77a5ff75531c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-809747",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-809747/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-809747",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-809747",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-809747",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb4a6ab0d0c1ca3e076d177b990f1af0626b75394d43bae7cb12b213c53f4ae6",
	            "SandboxKey": "/var/run/docker/netns/bb4a6ab0d0c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-809747": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d24ef9c80fc9fb9721b21336a3a800d8be9a6d5cab6fa0741ec759024df649cc",
	                    "EndpointID": "f50f6dfb338d4f827c1d6760366eb7c3ca7333bf4c085bae72e42d3a74e8bd57",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-809747",
	                        "1cd1d88cd1bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809747 -n old-k8s-version-809747
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-809747 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-809747 logs -n 25: (2.595011753s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-735719                              | cert-expiration-735719   | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-042522                               | force-systemd-env-042522 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-042522                            | force-systemd-env-042522 | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:56 UTC |
	| start   | -p cert-options-257492                                 | cert-options-257492      | jenkins | v1.34.0 | 20 Sep 24 18:56 UTC | 20 Sep 24 18:57 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-257492 ssh                                | cert-options-257492      | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-257492 -- sudo                         | cert-options-257492      | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-257492                                 | cert-options-257492      | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 18:57 UTC |
	| start   | -p old-k8s-version-809747                              | old-k8s-version-809747   | jenkins | v1.34.0 | 20 Sep 24 18:57 UTC | 20 Sep 24 19:00 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-735719                              | cert-expiration-735719   | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 18:59 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-735719                              | cert-expiration-735719   | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 18:59 UTC |
	| start   | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 18:59 UTC | 20 Sep 24 19:01 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-809747        | old-k8s-version-809747   | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-809747                              | old-k8s-version-809747   | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:00 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-809747             | old-k8s-version-809747   | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC | 20 Sep 24 19:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-809747                              | old-k8s-version-809747   | jenkins | v1.34.0 | 20 Sep 24 19:00 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-851913             | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-851913                  | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:01 UTC | 20 Sep 24 19:06 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-851913 image list                           | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC | 20 Sep 24 19:06 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC | 20 Sep 24 19:06 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC | 20 Sep 24 19:06 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC | 20 Sep 24 19:06 UTC |
	| delete  | -p no-preload-851913                                   | no-preload-851913        | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC | 20 Sep 24 19:06 UTC |
	| start   | -p embed-certs-208780                                  | embed-certs-208780       | jenkins | v1.34.0 | 20 Sep 24 19:06 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
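For reference, the final row in the table above corresponds to a single CLI invocation. Reconstructed from the table cells (a sketch; flag order is assumed, and the binary path is taken from the MINIKUBE_BIN value logged below):

    out/minikube-linux-arm64 start -p embed-certs-208780 \
        --memory=2200 --alsologtostderr --wait=true --embed-certs \
        --driver=docker --container-runtime=containerd \
        --kubernetes-version=v1.31.1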
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:06:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:06:24.024120  664902 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:06:24.024270  664902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:06:24.024283  664902 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:24.024288  664902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:06:24.024585  664902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 19:06:24.025076  664902 out.go:352] Setting JSON to false
	I0920 19:06:24.028287  664902 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10135,"bootTime":1726849049,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:06:24.028380  664902 start.go:139] virtualization:  
	I0920 19:06:24.031266  664902 out.go:177] * [embed-certs-208780] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:06:24.034361  664902 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:06:24.034488  664902 notify.go:220] Checking for updates...
	I0920 19:06:24.038338  664902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:06:24.040807  664902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 19:06:24.042982  664902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 19:06:24.045179  664902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:06:24.047192  664902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:06:24.049984  664902 config.go:182] Loaded profile config "old-k8s-version-809747": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 19:06:24.050179  664902 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:06:24.089167  664902 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:06:24.089390  664902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:06:24.156210  664902 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 19:06:24.142873751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:06:24.156324  664902 docker.go:318] overlay module found
	I0920 19:06:24.158639  664902 out.go:177] * Using the docker driver based on user configuration
	I0920 19:06:24.160998  664902 start.go:297] selected driver: docker
	I0920 19:06:24.161025  664902 start.go:901] validating driver "docker" against <nil>
	I0920 19:06:24.161040  664902 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:06:24.161745  664902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:06:24.245711  664902 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 19:06:24.235109654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:06:24.245906  664902 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:06:24.246125  664902 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:06:24.248524  664902 out.go:177] * Using Docker driver with root privileges
	I0920 19:06:24.250629  664902 cni.go:84] Creating CNI manager for ""
	I0920 19:06:24.250700  664902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:06:24.250715  664902 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:06:24.250801  664902 start.go:340] cluster config:
	{Name:embed-certs-208780 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-208780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:24.253504  664902 out.go:177] * Starting "embed-certs-208780" primary control-plane node in "embed-certs-208780" cluster
	I0920 19:06:24.255960  664902 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 19:06:24.258450  664902 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:06:24.260698  664902 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:06:24.260757  664902 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 19:06:24.260767  664902 cache.go:56] Caching tarball of preloaded images
	I0920 19:06:24.260859  664902 preload.go:172] Found /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 19:06:24.260867  664902 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 19:06:24.260979  664902 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/config.json ...
	I0920 19:06:24.260996  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/config.json: {Name:mk8dc3839c5a5d6438c6e752bef073719468e07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:24.261085  664902 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	W0920 19:06:24.282706  664902 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 19:06:24.282730  664902 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:06:24.282830  664902 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:06:24.282854  664902 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:06:24.282860  664902 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:06:24.282868  664902 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:06:24.282877  664902 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:06:24.415846  664902 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:06:24.415891  664902 cache.go:194] Successfully downloaded all kic artifacts
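The cache check above found the kicbase image in the local docker daemon but flagged it as the wrong architecture, so minikube fell back to its cached tarball instead of reusing the daemon copy. As a rough manual illustration only (this is not the call minikube itself makes internally), the mismatch can be seen by inspecting the locally stored image:

    # prints e.g. linux/amd64 when the local copy does not match the host's arm64
    docker image inspect --format '{{.Os}}/{{.Architecture}}' \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662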
	I0920 19:06:24.415921  664902 start.go:360] acquireMachinesLock for embed-certs-208780: {Name:mke615d6ad4ebf4a07dd1444d21acc29cd55def8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:06:24.416532  664902 start.go:364] duration metric: took 586.007µs to acquireMachinesLock for "embed-certs-208780"
	I0920 19:06:24.416571  664902 start.go:93] Provisioning new machine with config: &{Name:embed-certs-208780 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-208780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 19:06:24.416666  664902 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:06:24.419389  664902 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0920 19:06:24.419623  664902 start.go:159] libmachine.API.Create for "embed-certs-208780" (driver="docker")
	I0920 19:06:24.419655  664902 client.go:168] LocalClient.Create starting
	I0920 19:06:24.419729  664902 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem
	I0920 19:06:24.419772  664902 main.go:141] libmachine: Decoding PEM data...
	I0920 19:06:24.419792  664902 main.go:141] libmachine: Parsing certificate...
	I0920 19:06:24.419848  664902 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem
	I0920 19:06:24.419876  664902 main.go:141] libmachine: Decoding PEM data...
	I0920 19:06:24.419892  664902 main.go:141] libmachine: Parsing certificate...
	I0920 19:06:24.420263  664902 cli_runner.go:164] Run: docker network inspect embed-certs-208780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:06:24.436364  664902 cli_runner.go:211] docker network inspect embed-certs-208780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:06:24.436465  664902 network_create.go:284] running [docker network inspect embed-certs-208780] to gather additional debugging logs...
	I0920 19:06:24.436485  664902 cli_runner.go:164] Run: docker network inspect embed-certs-208780
	W0920 19:06:24.451939  664902 cli_runner.go:211] docker network inspect embed-certs-208780 returned with exit code 1
	I0920 19:06:24.451981  664902 network_create.go:287] error running [docker network inspect embed-certs-208780]: docker network inspect embed-certs-208780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-208780 not found
	I0920 19:06:24.451994  664902 network_create.go:289] output of [docker network inspect embed-certs-208780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-208780 not found
	
	** /stderr **
	I0920 19:06:24.452127  664902 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:06:24.469274  664902 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc05dbef80c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d9:4b:c6:6a} reservation:<nil>}
	I0920 19:06:24.469705  664902 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb0bc5b904d0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6a:ff:0d:2f} reservation:<nil>}
	I0920 19:06:24.470066  664902 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a2acba418518 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:6d:1c:20:df} reservation:<nil>}
	I0920 19:06:24.470562  664902 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d24ef9c80fc9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:d6:2f:3a:7a} reservation:<nil>}
	I0920 19:06:24.471075  664902 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189eab0}
	I0920 19:06:24.471100  664902 network_create.go:124] attempt to create docker network embed-certs-208780 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0920 19:06:24.471160  664902 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-208780 embed-certs-208780
	I0920 19:06:24.548862  664902 network_create.go:108] docker network embed-certs-208780 192.168.85.0/24 created
	I0920 19:06:24.548899  664902 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-208780" container
	I0920 19:06:24.548988  664902 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:06:24.565515  664902 cli_runner.go:164] Run: docker volume create embed-certs-208780 --label name.minikube.sigs.k8s.io=embed-certs-208780 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:06:24.584142  664902 oci.go:103] Successfully created a docker volume embed-certs-208780
	I0920 19:06:24.584232  664902 cli_runner.go:164] Run: docker run --rm --name embed-certs-208780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-208780 --entrypoint /usr/bin/test -v embed-certs-208780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:06:25.241011  664902 oci.go:107] Successfully prepared a docker volume embed-certs-208780
	I0920 19:06:25.241080  664902 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:06:25.241102  664902 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:06:25.241185  664902 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-208780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
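Node provisioning up to this point reduces to three docker CLI calls, all of which appear verbatim in the Run: lines above. A condensed sketch (minikube's --label arguments omitted; the preload tarball and kicbase image are left as placeholders for the exact paths logged):

    # dedicated bridge network for the cluster, using the free subnet found above
    docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 embed-certs-208780
    # named volume that backs /var inside the node container
    docker volume create embed-certs-208780
    # unpack the preloaded image tarball into that volume
    docker run --rm --entrypoint /usr/bin/tar \
        -v <preloaded-tarball.lz4>:/preloaded.tar:ro -v embed-certs-208780:/extractDir \
        <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir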
	I0920 19:06:26.835053  653977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:06:26.849195  653977 api_server.go:72] duration metric: took 5m52.870971077s to wait for apiserver process to appear ...
	I0920 19:06:26.849217  653977 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:06:26.849253  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:06:26.849308  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:06:26.897788  653977 cri.go:89] found id: "d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:26.897808  653977 cri.go:89] found id: "fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:26.897813  653977 cri.go:89] found id: ""
	I0920 19:06:26.897821  653977 logs.go:276] 2 containers: [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4]
	I0920 19:06:26.897878  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.902338  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.906548  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 19:06:26.906623  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:06:26.958378  653977 cri.go:89] found id: "085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:26.958398  653977 cri.go:89] found id: "ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:26.958403  653977 cri.go:89] found id: ""
	I0920 19:06:26.958411  653977 logs.go:276] 2 containers: [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5]
	I0920 19:06:26.958470  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.962667  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:26.966536  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 19:06:26.966653  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:06:27.027915  653977 cri.go:89] found id: "812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:27.027936  653977 cri.go:89] found id: "4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:27.027941  653977 cri.go:89] found id: ""
	I0920 19:06:27.027948  653977 logs.go:276] 2 containers: [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada]
	I0920 19:06:27.028013  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.033278  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.037758  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:06:27.037875  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:06:27.093653  653977 cri.go:89] found id: "2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:27.093726  653977 cri.go:89] found id: "7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:27.093746  653977 cri.go:89] found id: ""
	I0920 19:06:27.093775  653977 logs.go:276] 2 containers: [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043]
	I0920 19:06:27.093857  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.098536  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.102887  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:06:27.103005  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:06:27.176029  653977 cri.go:89] found id: "e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:27.176091  653977 cri.go:89] found id: "899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:27.176123  653977 cri.go:89] found id: ""
	I0920 19:06:27.176152  653977 logs.go:276] 2 containers: [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d]
	I0920 19:06:27.176226  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.181026  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.187926  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:06:27.188048  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:06:27.235096  653977 cri.go:89] found id: "fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:27.235167  653977 cri.go:89] found id: "dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:27.235187  653977 cri.go:89] found id: ""
	I0920 19:06:27.235212  653977 logs.go:276] 2 containers: [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32]
	I0920 19:06:27.235293  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.239445  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.243444  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 19:06:27.243570  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:06:27.296352  653977 cri.go:89] found id: "ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:27.296445  653977 cri.go:89] found id: "6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:27.296469  653977 cri.go:89] found id: ""
	I0920 19:06:27.296495  653977 logs.go:276] 2 containers: [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c]
	I0920 19:06:27.296576  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.300837  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.304814  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 19:06:27.304929  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 19:06:27.366905  653977 cri.go:89] found id: "a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:27.366980  653977 cri.go:89] found id: ""
	I0920 19:06:27.367003  653977 logs.go:276] 1 containers: [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd]
	I0920 19:06:27.367084  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.371726  653977 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 19:06:27.371855  653977 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 19:06:27.419397  653977 cri.go:89] found id: "78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:27.419472  653977 cri.go:89] found id: "558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
	I0920 19:06:27.419518  653977 cri.go:89] found id: ""
	I0920 19:06:27.419545  653977 logs.go:276] 2 containers: [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f]
	I0920 19:06:27.419629  653977 ssh_runner.go:195] Run: which crictl
	I0920 19:06:27.423969  653977 ssh_runner.go:195] Run: which crictl
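The block above locates each control-plane component's containers by name; the "Gathering logs" block that follows pulls the last 400 lines from each one. Per component this boils down to two crictl invocations, taken verbatim from the Run: lines (kube-apiserver shown as the example):

    # list all container IDs (running or exited) whose name matches the component
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail a matched container's logs by ID
    sudo /usr/bin/crictl logs --tail 400 d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7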
	I0920 19:06:27.430101  653977 logs.go:123] Gathering logs for kube-proxy [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec] ...
	I0920 19:06:27.430174  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec"
	I0920 19:06:27.488295  653977 logs.go:123] Gathering logs for kube-controller-manager [dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32] ...
	I0920 19:06:27.488370  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32"
	I0920 19:06:27.567691  653977 logs.go:123] Gathering logs for kube-apiserver [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7] ...
	I0920 19:06:27.567769  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7"
	I0920 19:06:27.637441  653977 logs.go:123] Gathering logs for coredns [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327] ...
	I0920 19:06:27.637485  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327"
	I0920 19:06:27.712024  653977 logs.go:123] Gathering logs for coredns [4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada] ...
	I0920 19:06:27.712059  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada"
	I0920 19:06:27.761492  653977 logs.go:123] Gathering logs for kube-scheduler [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677] ...
	I0920 19:06:27.761548  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677"
	I0920 19:06:27.815260  653977 logs.go:123] Gathering logs for kube-proxy [899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d] ...
	I0920 19:06:27.815292  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d"
	I0920 19:06:27.868976  653977 logs.go:123] Gathering logs for kube-controller-manager [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548] ...
	I0920 19:06:27.869008  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548"
	I0920 19:06:27.955237  653977 logs.go:123] Gathering logs for kindnet [6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c] ...
	I0920 19:06:27.955275  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c"
	I0920 19:06:28.009037  653977 logs.go:123] Gathering logs for kubernetes-dashboard [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd] ...
	I0920 19:06:28.009074  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd"
	I0920 19:06:28.059987  653977 logs.go:123] Gathering logs for kube-apiserver [fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4] ...
	I0920 19:06:28.060019  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4"
	I0920 19:06:28.165966  653977 logs.go:123] Gathering logs for etcd [ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5] ...
	I0920 19:06:28.166007  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5"
	I0920 19:06:28.221462  653977 logs.go:123] Gathering logs for container status ...
	I0920 19:06:28.221495  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:06:28.317786  653977 logs.go:123] Gathering logs for storage-provisioner [558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f] ...
	I0920 19:06:28.317819  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f"
	I0920 19:06:28.363443  653977 logs.go:123] Gathering logs for containerd ...
	I0920 19:06:28.363472  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 19:06:28.429304  653977 logs.go:123] Gathering logs for kube-scheduler [7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043] ...
	I0920 19:06:28.429342  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043"
	I0920 19:06:28.511550  653977 logs.go:123] Gathering logs for dmesg ...
	I0920 19:06:28.511585  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:06:28.530657  653977 logs.go:123] Gathering logs for etcd [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70] ...
	I0920 19:06:28.530687  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70"
	I0920 19:06:28.612728  653977 logs.go:123] Gathering logs for kindnet [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de] ...
	I0920 19:06:28.612765  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de"
	I0920 19:06:28.688022  653977 logs.go:123] Gathering logs for storage-provisioner [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6] ...
	I0920 19:06:28.688058  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6"
	I0920 19:06:28.738857  653977 logs.go:123] Gathering logs for kubelet ...
	I0920 19:06:28.738886  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:06:28.797990  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302392     664 reflector.go:138] object-"kube-system"/"kindnet-token-ll598": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ll598" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798246  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302467     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5rgk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5rgk7" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798535  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302516     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-2m84d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2m84d" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798750  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302566     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.798966  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302642     664 reflector.go:138] object-"kube-system"/"coredns-token-tqf8k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tqf8k" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799170  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302685     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799394  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302728     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fk86c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fk86c" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.799604  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:52 old-k8s-version-809747 kubelet[664]: E0920 19:00:52.302769     664 reflector.go:138] object-"default"/"default-token-gqwtk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqwtk" is forbidden: User "system:node:old-k8s-version-809747" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-809747' and this object
	W0920 19:06:28.808631  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:56 old-k8s-version-809747 kubelet[664]: E0920 19:00:56.277740     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.808831  653977 logs.go:138] Found kubelet problem: Sep 20 19:00:57 old-k8s-version-809747 kubelet[664]: E0920 19:00:57.075859     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.811689  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:08 old-k8s-version-809747 kubelet[664]: E0920 19:01:08.853318     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.813374  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:19 old-k8s-version-809747 kubelet[664]: E0920 19:01:19.861009     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.814320  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:22 old-k8s-version-809747 kubelet[664]: E0920 19:01:22.209550     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.814660  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:23 old-k8s-version-809747 kubelet[664]: E0920 19:01:23.208449     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.814996  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:24 old-k8s-version-809747 kubelet[664]: E0920 19:01:24.374198     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.815437  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:25 old-k8s-version-809747 kubelet[664]: E0920 19:01:25.231977     664 pod_workers.go:191] Error syncing pod fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae ("storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fd544a9f-bcfa-4019-9bf3-71e1e9a2d2ae)"
	W0920 19:06:28.817920  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:30 old-k8s-version-809747 kubelet[664]: E0920 19:01:30.851954     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.819220  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:38 old-k8s-version-809747 kubelet[664]: E0920 19:01:38.279749     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.819409  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:42 old-k8s-version-809747 kubelet[664]: E0920 19:01:42.842501     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.819748  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:44 old-k8s-version-809747 kubelet[664]: E0920 19:01:44.374858     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.820076  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:55 old-k8s-version-809747 kubelet[664]: E0920 19:01:55.842879     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.820260  653977 logs.go:138] Found kubelet problem: Sep 20 19:01:56 old-k8s-version-809747 kubelet[664]: E0920 19:01:56.842908     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.820577  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:08 old-k8s-version-809747 kubelet[664]: E0920 19:02:08.844439     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.821043  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:10 old-k8s-version-809747 kubelet[664]: E0920 19:02:10.389468     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.821373  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:14 old-k8s-version-809747 kubelet[664]: E0920 19:02:14.374714     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.823922  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:20 old-k8s-version-809747 kubelet[664]: E0920 19:02:20.851952     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.824256  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:26 old-k8s-version-809747 kubelet[664]: E0920 19:02:26.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.824495  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:31 old-k8s-version-809747 kubelet[664]: E0920 19:02:31.842985     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.824826  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:41 old-k8s-version-809747 kubelet[664]: E0920 19:02:41.843033     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.825012  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:44 old-k8s-version-809747 kubelet[664]: E0920 19:02:44.846601     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.825614  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:54 old-k8s-version-809747 kubelet[664]: E0920 19:02:54.515666     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.825802  653977 logs.go:138] Found kubelet problem: Sep 20 19:02:59 old-k8s-version-809747 kubelet[664]: E0920 19:02:59.842434     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.826131  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:04 old-k8s-version-809747 kubelet[664]: E0920 19:03:04.374783     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.826352  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:11 old-k8s-version-809747 kubelet[664]: E0920 19:03:11.842669     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.826681  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:16 old-k8s-version-809747 kubelet[664]: E0920 19:03:16.841791     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.826866  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:22 old-k8s-version-809747 kubelet[664]: E0920 19:03:22.842284     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.827271  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:28 old-k8s-version-809747 kubelet[664]: E0920 19:03:28.842043     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.827472  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:33 old-k8s-version-809747 kubelet[664]: E0920 19:03:33.843004     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.827805  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:42 old-k8s-version-809747 kubelet[664]: E0920 19:03:42.841902     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.830249  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:48 old-k8s-version-809747 kubelet[664]: E0920 19:03:48.851175     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 19:06:28.830593  653977 logs.go:138] Found kubelet problem: Sep 20 19:03:54 old-k8s-version-809747 kubelet[664]: E0920 19:03:54.841890     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.830778  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:01 old-k8s-version-809747 kubelet[664]: E0920 19:04:01.843066     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.831103  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:08 old-k8s-version-809747 kubelet[664]: E0920 19:04:08.841968     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.831288  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:15 old-k8s-version-809747 kubelet[664]: E0920 19:04:15.842571     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.831935  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:21 old-k8s-version-809747 kubelet[664]: E0920 19:04:21.744371     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832267  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:24 old-k8s-version-809747 kubelet[664]: E0920 19:04:24.377802     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832453  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:29 old-k8s-version-809747 kubelet[664]: E0920 19:04:29.842421     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.832781  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:36 old-k8s-version-809747 kubelet[664]: E0920 19:04:36.841921     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.832965  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:44 old-k8s-version-809747 kubelet[664]: E0920 19:04:44.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.833331  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:51 old-k8s-version-809747 kubelet[664]: E0920 19:04:51.843005     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.833519  653977 logs.go:138] Found kubelet problem: Sep 20 19:04:59 old-k8s-version-809747 kubelet[664]: E0920 19:04:59.842361     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.833846  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:02 old-k8s-version-809747 kubelet[664]: E0920 19:05:02.842297     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834176  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:13 old-k8s-version-809747 kubelet[664]: E0920 19:05:13.842057     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834367  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:14 old-k8s-version-809747 kubelet[664]: E0920 19:05:14.842283     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.834699  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:24 old-k8s-version-809747 kubelet[664]: E0920 19:05:24.842121     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.834882  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:28 old-k8s-version-809747 kubelet[664]: E0920 19:05:28.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.835207  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:37 old-k8s-version-809747 kubelet[664]: E0920 19:05:37.842390     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.835394  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.835726  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.835911  653977 logs.go:138] Found kubelet problem: Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.836240  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.836424  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:28.836754  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:28.836938  653977 logs.go:138] Found kubelet problem: Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:28.836948  653977 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:06:28.836964  653977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:06:29.018737  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:29.018770  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:06:29.018819  653977 out.go:270] X Problems detected in kubelet:
	W0920 19:06:29.018834  653977 out.go:270]   Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:29.018842  653977 out.go:270]   Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:29.018857  653977 out.go:270]   Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 19:06:29.018863  653977 out.go:270]   Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	W0920 19:06:29.018869  653977 out.go:270]   Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 19:06:29.018875  653977 out.go:358] Setting ErrFile to fd 2...
	I0920 19:06:29.018889  653977 out.go:392] TERM=,COLORTERM=, which probably does not support color
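Note on the kubelet problems summarized above: both failure loops are visible in the raw log itself. metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4, a registry name that cannot resolve, so every pull ends in ErrImagePull/ImagePullBackOff; dashboard-metrics-scraper keeps crashing and its CrashLoopBackOff delay grows from 20s to 2m40s over the run. A minimal, illustrative way to confirm the same state with kubectl (the context and pod names are taken from the log; nothing else is implied about the test harness):

    # Current restart/back-off state of the two problem pods
    kubectl --context old-k8s-version-809747 -n kube-system get pod metrics-server-9975d5f86-bx26v
    kubectl --context old-k8s-version-809747 -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-z27f2

    # Image reference and pull errors behind the metrics-server back-off
    kubectl --context old-k8s-version-809747 -n kube-system describe pod metrics-server-9975d5f86-bx26v | grep -E 'Image:|Back-off|Failed'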
	I0920 19:06:30.550575  664902 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-208780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (5.309348894s)
	I0920 19:06:30.550612  664902 kic.go:203] duration metric: took 5.309506497s to extract preloaded images to volume ...
	W0920 19:06:30.550774  664902 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:06:30.550900  664902 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:06:30.623085  664902 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-208780 --name embed-certs-208780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-208780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-208780 --network embed-certs-208780 --ip 192.168.85.2 --volume embed-certs-208780:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:06:30.973508  664902 cli_runner.go:164] Run: docker container inspect embed-certs-208780 --format={{.State.Running}}
	I0920 19:06:31.004886  664902 cli_runner.go:164] Run: docker container inspect embed-certs-208780 --format={{.State.Status}}
	I0920 19:06:31.034518  664902 cli_runner.go:164] Run: docker exec embed-certs-208780 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:06:31.119343  664902 oci.go:144] the created container "embed-certs-208780" has a running status.
	I0920 19:06:31.119370  664902 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa...
	I0920 19:06:31.545715  664902 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:06:31.575719  664902 cli_runner.go:164] Run: docker container inspect embed-certs-208780 --format={{.State.Status}}
	I0920 19:06:31.601327  664902 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:06:31.601348  664902 kic_runner.go:114] Args: [docker exec --privileged embed-certs-208780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:06:31.685548  664902 cli_runner.go:164] Run: docker container inspect embed-certs-208780 --format={{.State.Status}}
	I0920 19:06:31.713655  664902 machine.go:93] provisionDockerMachine start ...
	I0920 19:06:31.713747  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:31.740395  664902 main.go:141] libmachine: Using SSH client type: native
	I0920 19:06:31.740669  664902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0920 19:06:31.740679  664902 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:06:31.930610  664902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208780
	
	I0920 19:06:31.930686  664902 ubuntu.go:169] provisioning hostname "embed-certs-208780"
	I0920 19:06:31.930778  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:31.947730  664902 main.go:141] libmachine: Using SSH client type: native
	I0920 19:06:31.947989  664902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0920 19:06:31.948007  664902 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208780 && echo "embed-certs-208780" | sudo tee /etc/hostname
	I0920 19:06:32.125174  664902 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208780
	
	I0920 19:06:32.125278  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:32.155835  664902 main.go:141] libmachine: Using SSH client type: native
	I0920 19:06:32.156086  664902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0920 19:06:32.156109  664902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208780/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:06:32.326095  664902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:06:32.326169  664902 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-440039/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-440039/.minikube}
	I0920 19:06:32.326216  664902 ubuntu.go:177] setting up certificates
	I0920 19:06:32.326254  664902 provision.go:84] configureAuth start
	I0920 19:06:32.326386  664902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-208780
	I0920 19:06:32.343993  664902 provision.go:143] copyHostCerts
	I0920 19:06:32.344057  664902 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem, removing ...
	I0920 19:06:32.344068  664902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem
	I0920 19:06:32.344138  664902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/ca.pem (1082 bytes)
	I0920 19:06:32.344258  664902 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem, removing ...
	I0920 19:06:32.344263  664902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem
	I0920 19:06:32.344291  664902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/cert.pem (1123 bytes)
	I0920 19:06:32.344340  664902 exec_runner.go:144] found /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem, removing ...
	I0920 19:06:32.344344  664902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem
	I0920 19:06:32.344368  664902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-440039/.minikube/key.pem (1675 bytes)
	I0920 19:06:32.344410  664902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208780 san=[127.0.0.1 192.168.85.2 embed-certs-208780 localhost minikube]
	I0920 19:06:32.597656  664902 provision.go:177] copyRemoteCerts
	I0920 19:06:32.597749  664902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:06:32.597817  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:32.625994  664902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa Username:docker}
	I0920 19:06:32.731551  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:06:32.757393  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 19:06:32.783130  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:06:32.809505  664902 provision.go:87] duration metric: took 483.217164ms to configureAuth
	I0920 19:06:32.809539  664902 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:06:32.809743  664902 config.go:182] Loaded profile config "embed-certs-208780": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:06:32.809757  664902 machine.go:96] duration metric: took 1.096084635s to provisionDockerMachine
	I0920 19:06:32.809765  664902 client.go:171] duration metric: took 8.390104244s to LocalClient.Create
	I0920 19:06:32.809786  664902 start.go:167] duration metric: took 8.390163804s to libmachine.API.Create "embed-certs-208780"
	I0920 19:06:32.809800  664902 start.go:293] postStartSetup for "embed-certs-208780" (driver="docker")
	I0920 19:06:32.809810  664902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:06:32.809871  664902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:06:32.809916  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:32.828510  664902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa Username:docker}
	I0920 19:06:32.935752  664902 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:06:32.939055  664902 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:06:32.939088  664902 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:06:32.939099  664902 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:06:32.939105  664902 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:06:32.939116  664902 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/addons for local assets ...
	I0920 19:06:32.939174  664902 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-440039/.minikube/files for local assets ...
	I0920 19:06:32.939249  664902 filesync.go:149] local asset: /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem -> 4467832.pem in /etc/ssl/certs
	I0920 19:06:32.939380  664902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:06:32.948169  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem --> /etc/ssl/certs/4467832.pem (1708 bytes)
	I0920 19:06:32.973379  664902 start.go:296] duration metric: took 163.563799ms for postStartSetup
	I0920 19:06:32.973762  664902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-208780
	I0920 19:06:32.994600  664902 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/config.json ...
	I0920 19:06:32.994905  664902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:06:32.994949  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:33.025672  664902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa Username:docker}
	I0920 19:06:33.123599  664902 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:06:33.128294  664902 start.go:128] duration metric: took 8.711607888s to createHost
	I0920 19:06:33.128318  664902 start.go:83] releasing machines lock for "embed-certs-208780", held for 8.711768946s
	I0920 19:06:33.128395  664902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-208780
	I0920 19:06:33.146094  664902 ssh_runner.go:195] Run: cat /version.json
	I0920 19:06:33.146108  664902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:06:33.146178  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:33.146185  664902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-208780
	I0920 19:06:33.174212  664902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa Username:docker}
	I0920 19:06:33.175387  664902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/embed-certs-208780/id_rsa Username:docker}
	I0920 19:06:33.398827  664902 ssh_runner.go:195] Run: systemctl --version
	I0920 19:06:33.403169  664902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:06:33.407901  664902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 19:06:33.443225  664902 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:06:33.443308  664902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:06:33.474284  664902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
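The two find commands above are minikube's CNI cleanup: the first patches any loopback config under /etc/cni/net.d (adding a "name" field and pinning cniVersion to 1.0.0), the second sidelines competing bridge/podman configs by renaming them with a .mk_disabled suffix, which is what the "disabled [...] bridge cni config(s)" line reports. A rough spot-check from inside the node, assuming shell access via something like "minikube ssh -p embed-certs-208780" (the invocation is illustrative; only the paths come from the log):

    # Disabled configs keep their content but gain the .mk_disabled suffix
    ls -l /etc/cni/net.d/

    # The patched loopback config should now carry a name and cniVersion 1.0.0
    sudo grep -E '"name"|"cniVersion"' /etc/cni/net.d/*loopback*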
	I0920 19:06:33.474335  664902 start.go:495] detecting cgroup driver to use...
	I0920 19:06:33.474370  664902 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:06:33.474457  664902 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 19:06:33.487291  664902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 19:06:33.499720  664902 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:06:33.499793  664902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:06:33.513362  664902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:06:33.528563  664902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:06:33.626585  664902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:06:33.727572  664902 docker.go:233] disabling docker service ...
	I0920 19:06:33.727684  664902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:06:33.749421  664902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:06:33.761630  664902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:06:33.862377  664902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:06:33.967231  664902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:06:33.979718  664902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:06:34.005001  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 19:06:34.017091  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 19:06:34.030939  664902 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 19:06:34.031031  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 19:06:34.042377  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:06:34.053914  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 19:06:34.067214  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:06:34.085553  664902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:06:34.095782  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 19:06:34.107830  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 19:06:34.119042  664902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 19:06:34.130947  664902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:06:34.141108  664902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:06:34.150505  664902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:34.248601  664902 ssh_runner.go:195] Run: sudo systemctl restart containerd
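The sed edits a few lines up rewrite /etc/containerd/config.toml before this restart: sandbox_image becomes registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the runc v1 runtime names are rewritten to io.containerd.runc.v2, conf_dir is pinned to /etc/cni/net.d, and enable_unprivileged_ports is switched on. A quick, illustrative spot-check of those values on the node (commands only; the expected values are the ones the seds above write):

    # Settings the sed pipeline should have left behind
    sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml

    # containerd should be active again after the daemon-reload + restart
    systemctl is-active containerd && containerd --version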
	I0920 19:06:34.415895  664902 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 19:06:34.416010  664902 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 19:06:34.420153  664902 start.go:563] Will wait 60s for crictl version
	I0920 19:06:34.420230  664902 ssh_runner.go:195] Run: which crictl
	I0920 19:06:34.423945  664902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:06:34.463906  664902 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 19:06:34.464024  664902 ssh_runner.go:195] Run: containerd --version
	I0920 19:06:34.498929  664902 ssh_runner.go:195] Run: containerd --version
	I0920 19:06:34.527133  664902 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0920 19:06:34.529542  664902 cli_runner.go:164] Run: docker network inspect embed-certs-208780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:06:34.545775  664902 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0920 19:06:34.549616  664902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
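The grep/rewrite pair above makes the host.minikube.internal mapping idempotent: any stale entry is filtered out and 192.168.85.1 (the docker network gateway for this profile) is appended. A minimal check of the outcome from inside the node (illustrative only):

    # Should print the gateway address written by the command above
    grep 'host.minikube.internal' /etc/hosts
    getent hosts host.minikube.internal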
	I0920 19:06:34.561254  664902 kubeadm.go:883] updating cluster {Name:embed-certs-208780 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-208780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:06:34.561388  664902 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:06:34.561467  664902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:06:34.600774  664902 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:06:34.600801  664902 containerd.go:534] Images already preloaded, skipping extraction
	I0920 19:06:34.600865  664902 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:06:34.653722  664902 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:06:34.653746  664902 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:06:34.653756  664902 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0920 19:06:34.653854  664902 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-208780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-208780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:06:34.653922  664902 ssh_runner.go:195] Run: sudo crictl info
	I0920 19:06:34.697411  664902 cni.go:84] Creating CNI manager for ""
	I0920 19:06:34.697442  664902 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:06:34.697452  664902 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:06:34.697475  664902 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-208780 NodeName:embed-certs-208780 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:06:34.697611  664902 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-208780"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
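The kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2172-byte scp). As a sketch only, not the exact command sequence minikube runs, a config like this would typically be checked and consumed with kubeadm v1.31 as follows:

    # Static sanity check of the generated config (the validate subcommand exists in recent kubeadm releases)
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

    # Bootstrapping a control plane from the same file would look roughly like this
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all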
	
	I0920 19:06:34.697688  664902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:06:34.707470  664902 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:06:34.707565  664902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:06:34.716451  664902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0920 19:06:34.734556  664902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:06:34.753320  664902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 19:06:34.772982  664902 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:06:34.776399  664902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:06:34.787662  664902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:06:34.892726  664902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:06:34.911215  664902 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780 for IP: 192.168.85.2
	I0920 19:06:34.911239  664902 certs.go:194] generating shared ca certs ...
	I0920 19:06:34.911256  664902 certs.go:226] acquiring lock for ca certs: {Name:mk3d7fcf9ade00248d7372a8cec4403eeffc64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:34.911428  664902 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key
	I0920 19:06:34.911484  664902 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key
	I0920 19:06:34.911496  664902 certs.go:256] generating profile certs ...
	I0920 19:06:34.911556  664902 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.key
	I0920 19:06:34.911590  664902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.crt with IP's: []
	I0920 19:06:35.789067  664902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.crt ...
	I0920 19:06:35.789101  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.crt: {Name:mkaf13b24f6bb8e22c5eeddae26e8a8b238a9099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:35.789301  664902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.key ...
	I0920 19:06:35.789314  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/client.key: {Name:mk91ebc337326469b242489facd16cdb4c96eea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:35.789902  664902 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key.ba0ee4f5
	I0920 19:06:35.789928  664902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt.ba0ee4f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0920 19:06:36.246174  664902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt.ba0ee4f5 ...
	I0920 19:06:36.246211  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt.ba0ee4f5: {Name:mkfb38bdb14b7ec9e1e42499a3a7f1b92deb0dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:36.246943  664902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key.ba0ee4f5 ...
	I0920 19:06:36.246968  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key.ba0ee4f5: {Name:mk1d524948ec0c93c36f6b0f68a56cc40f6ef017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:36.247069  664902 certs.go:381] copying /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt.ba0ee4f5 -> /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt
	I0920 19:06:36.247166  664902 certs.go:385] copying /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key.ba0ee4f5 -> /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key
	I0920 19:06:36.247230  664902 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.key
	I0920 19:06:36.247249  664902 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.crt with IP's: []
	I0920 19:06:36.533763  664902 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.crt ...
	I0920 19:06:36.533795  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.crt: {Name:mk350e26d87c6bd3e26fa1d118d6669ba756ef4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:36.534561  664902 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.key ...
	I0920 19:06:36.534582  664902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.key: {Name:mk36132daa9b5f9a8d4745aeeb0757a67c2faef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:06:36.535347  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783.pem (1338 bytes)
	W0920 19:06:36.535398  664902 certs.go:480] ignoring /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783_empty.pem, impossibly tiny 0 bytes
	I0920 19:06:36.535417  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:06:36.535451  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:06:36.535482  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:06:36.535510  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/certs/key.pem (1675 bytes)
	I0920 19:06:36.535558  664902 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem (1708 bytes)
	I0920 19:06:36.536565  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:06:36.567269  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:06:36.594605  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:06:36.622897  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:06:36.668089  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 19:06:36.695643  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:06:36.724428  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:06:36.749945  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/embed-certs-208780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:06:36.775722  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/ssl/certs/4467832.pem --> /usr/share/ca-certificates/4467832.pem (1708 bytes)
	I0920 19:06:36.801489  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:06:36.828250  664902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-440039/.minikube/certs/446783.pem --> /usr/share/ca-certificates/446783.pem (1338 bytes)
	I0920 19:06:36.856100  664902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:06:36.875764  664902 ssh_runner.go:195] Run: openssl version
	I0920 19:06:36.887189  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:06:36.896849  664902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:36.900407  664902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:10 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:36.900501  664902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:06:36.907841  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:06:36.917192  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/446783.pem && ln -fs /usr/share/ca-certificates/446783.pem /etc/ssl/certs/446783.pem"
	I0920 19:06:36.926633  664902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/446783.pem
	I0920 19:06:36.930503  664902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 18:20 /usr/share/ca-certificates/446783.pem
	I0920 19:06:36.930604  664902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/446783.pem
	I0920 19:06:36.938375  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/446783.pem /etc/ssl/certs/51391683.0"
	I0920 19:06:36.948370  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4467832.pem && ln -fs /usr/share/ca-certificates/4467832.pem /etc/ssl/certs/4467832.pem"
	I0920 19:06:36.958229  664902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4467832.pem
	I0920 19:06:36.961885  664902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 18:20 /usr/share/ca-certificates/4467832.pem
	I0920 19:06:36.961972  664902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4467832.pem
	I0920 19:06:36.969068  664902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4467832.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:06:36.978796  664902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:06:36.982267  664902 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:06:36.982381  664902 kubeadm.go:392] StartCluster: {Name:embed-certs-208780 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-208780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:06:36.982480  664902 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 19:06:36.982550  664902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:06:37.036302  664902 cri.go:89] found id: ""
	I0920 19:06:37.036436  664902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:06:37.055766  664902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:06:37.066495  664902 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:06:37.066616  664902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:06:37.082378  664902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:06:37.082398  664902 kubeadm.go:157] found existing configuration files:
	
	I0920 19:06:37.082451  664902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:06:37.092289  664902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:06:37.092374  664902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:06:37.101310  664902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:06:37.110809  664902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:06:37.110883  664902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:06:37.119690  664902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:06:37.128499  664902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:06:37.128575  664902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:06:37.137584  664902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:06:37.147315  664902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:06:37.147407  664902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:06:37.158144  664902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 19:06:37.209270  664902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:06:37.209552  664902 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:06:37.231267  664902 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:06:37.231381  664902 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:06:37.231434  664902 kubeadm.go:310] OS: Linux
	I0920 19:06:37.231505  664902 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:06:37.231574  664902 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:06:37.231646  664902 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:06:37.231716  664902 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:06:37.231785  664902 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:06:37.231851  664902 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:06:37.231911  664902 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:06:37.231982  664902 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:06:37.232051  664902 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:06:37.295290  664902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:06:37.295484  664902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:06:37.295633  664902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:06:37.301442  664902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:06:37.306077  664902 out.go:235]   - Generating certificates and keys ...
	I0920 19:06:37.306196  664902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:06:37.306280  664902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:06:38.111076  664902 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:06:39.019670  653977 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0920 19:06:39.029993  653977 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0920 19:06:39.032174  653977 out.go:201] 
	W0920 19:06:39.034164  653977 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 19:06:39.034205  653977 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 19:06:39.034237  653977 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 19:06:39.034250  653977 out.go:270] * 
	W0920 19:06:39.035133  653977 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 19:06:39.038378  653977 out.go:201] 
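
The SecondStart failure above ends with K8S_UNHEALTHY_CONTROL_PLANE: the healthz probe against https://192.168.76.2:8443/healthz answers 200, yet the wait fails because the control plane never reports the expected v1.20.0 version. As a minimal, self-contained sketch of that kind of health probe (illustrative only, not minikube's implementation; the address is taken from the log, and TLS verification is skipped here purely to keep the example standalone):

// sketch: query the apiserver /healthz endpoint seen in the log
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The real client trusts the cluster CA and presents client certs;
			// skipping verification keeps this sketch free of key material.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}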
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0c98e7f3d0165       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8cc227382c509       dashboard-metrics-scraper-8d5bb5db8-z27f2
	78dbadd374e40       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   c018a1a078bd2       storage-provisioner
	a29e29bd825c6       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   0892bdde55047       kubernetes-dashboard-cd95d586-xd568
	e7bf621eab17a       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   0542638a6c1df       kube-proxy-tczmb
	66c9324267027       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   34d3dc3be77ec       busybox
	558af720d1a83       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   c018a1a078bd2       storage-provisioner
	ad9741225d44e       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   c2eaa1091bbec       kindnet-jz4sz
	812bd46cee424       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   f5bbeb3f1aedd       coredns-74ff55c5b-682lc
	d106efe91d320       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   4f4d1de53f75c       kube-apiserver-old-k8s-version-809747
	2097a062acf82       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   27cd1cb2c4322       kube-scheduler-old-k8s-version-809747
	fc97c68baab83       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   1a6665d996978       kube-controller-manager-old-k8s-version-809747
	085be7ace136b       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   fbbd0651c1fd5       etcd-old-k8s-version-809747
	1b2fc193df81b       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   2b6e75cb48369       busybox
	4142a2b627abf       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   32f01c3ad0248       coredns-74ff55c5b-682lc
	6451e639fa627       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   7da1fe9063ba6       kindnet-jz4sz
	899a291eb59c3       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   b59b66a8f5546       kube-proxy-tczmb
	ffeafa6ea9046       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   02797dcea4028       etcd-old-k8s-version-809747
	dc58bda86f634       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   0b3190b6a78c4       kube-controller-manager-old-k8s-version-809747
	7407be95357aa       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   f054b3622c6e4       kube-scheduler-old-k8s-version-809747
	fb7f2e4033f4a       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   8d1f1f6b3d8b1       kube-apiserver-old-k8s-version-809747
	
	
	==> containerd <==
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.864136338Z" level=info msg="CreateContainer within sandbox \"8cc227382c50915e7f055d439af58609e29f9eb1eb4c3361bf5d28b10af94b2d\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059\""
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.866262159Z" level=info msg="StartContainer for \"c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059\""
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.931454539Z" level=info msg="StartContainer for \"c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059\" returns successfully"
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.972353728Z" level=info msg="shim disconnected" id=c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059 namespace=k8s.io
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.972414010Z" level=warning msg="cleaning up after shim disconnected" id=c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059 namespace=k8s.io
	Sep 20 19:02:53 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:53.972424439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 19:02:54 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:54.516969943Z" level=info msg="RemoveContainer for \"95c2e7b27182cdc6e01c35a85eba6e2564708544309aea715b3cf36f516b1b84\""
	Sep 20 19:02:54 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:02:54.523495193Z" level=info msg="RemoveContainer for \"95c2e7b27182cdc6e01c35a85eba6e2564708544309aea715b3cf36f516b1b84\" returns successfully"
	Sep 20 19:03:48 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:03:48.842887541Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:03:48 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:03:48.848957132Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 20 19:03:48 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:03:48.850587259Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 19:03:48 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:03:48.850723332Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.843997888Z" level=info msg="CreateContainer within sandbox \"8cc227382c50915e7f055d439af58609e29f9eb1eb4c3361bf5d28b10af94b2d\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.867116428Z" level=info msg="CreateContainer within sandbox \"8cc227382c50915e7f055d439af58609e29f9eb1eb4c3361bf5d28b10af94b2d\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f\""
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.867924957Z" level=info msg="StartContainer for \"0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f\""
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.947707575Z" level=info msg="StartContainer for \"0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f\" returns successfully"
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.973908303Z" level=info msg="shim disconnected" id=0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f namespace=k8s.io
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.973994522Z" level=warning msg="cleaning up after shim disconnected" id=0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f namespace=k8s.io
	Sep 20 19:04:20 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:20.974009661Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 19:04:21 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:21.745976313Z" level=info msg="RemoveContainer for \"c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059\""
	Sep 20 19:04:21 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:04:21.752798180Z" level=info msg="RemoveContainer for \"c3b46354ae0e4139240c56942b5c41741ba0ec9f8b25200c2ed7d08557ef6059\" returns successfully"
	Sep 20 19:06:34 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:06:34.842969744Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:06:34 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:06:34.885764031Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 20 19:06:34 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:06:34.887397319Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 19:06:34 old-k8s-version-809747 containerd[571]: time="2024-09-20T19:06:34.887543115Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [4142a2b627abf853d590ea7ce75f7deda0b722cae15d402d8283d5eba06bcada] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58376 - 31684 "HINFO IN 3451975236257540435.8144399278605540589. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010957265s
	
	
	==> coredns [812bd46cee424729eb0e399a232c5b3f90764d593c9aca7c0f74a4afe099a327] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40062 - 64181 "HINFO IN 6255402163476102179.5470227237908667855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011273756s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0920 19:01:24.567685       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 19:00:54.56723247 +0000 UTC m=+0.021312403) (total time: 30.000351087s):
	Trace[2019727887]: [30.000351087s] [30.000351087s] END
	E0920 19:01:24.567721       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 19:01:24.568109       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 19:00:54.566641065 +0000 UTC m=+0.020720998) (total time: 30.001451666s):
	Trace[939984059]: [30.001451666s] [30.001451666s] END
	E0920 19:01:24.568162       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 19:01:24.568290       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 19:00:54.56765874 +0000 UTC m=+0.021738673) (total time: 30.00061547s):
	Trace[911902081]: [30.00061547s] [30.00061547s] END
	E0920 19:01:24.568305       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-809747
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-809747
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=old-k8s-version-809747
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_58_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:58:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-809747
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:01:42 +0000   Fri, 20 Sep 2024 18:57:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:01:42 +0000   Fri, 20 Sep 2024 18:57:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:01:42 +0000   Fri, 20 Sep 2024 18:57:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:01:42 +0000   Fri, 20 Sep 2024 18:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-809747
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f17185a013eb45b9ad4ddc75ddfefa8a
	  System UUID:                b386f02c-96dd-4c82-b396-1fb15cc3eff5
	  Boot ID:                    cfeac633-1b4b-4878-a7d1-bdd76da68a0f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-682lc                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m16s
	  kube-system                 etcd-old-k8s-version-809747                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m23s
	  kube-system                 kindnet-jz4sz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m16s
	  kube-system                 kube-apiserver-old-k8s-version-809747             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-old-k8s-version-809747    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-tczmb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-old-k8s-version-809747             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 metrics-server-9975d5f86-bx26v                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-z27f2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-xd568               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m44s (x5 over 8m44s)  kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x5 over 8m44s)  kubelet     Node old-k8s-version-809747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x4 over 8m44s)  kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m23s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet     Node old-k8s-version-809747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s                  kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m16s                  kubelet     Node old-k8s-version-809747 status is now: NodeReady
	  Normal  Starting                 8m15s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x7 over 6m)        kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-809747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x8 over 6m)        kubelet     Node old-k8s-version-809747 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep20 17:41] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep20 17:43] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.012326] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.005861] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.189191] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[Sep20 18:22] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001031] FS-Cache: O-cookie d=000000007b04e949{9P.session} n=00000000fd4f4036
	[  +0.001114] FS-Cache: O-key=[10] '34323936373734333137'
	[  +0.000820] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000007b04e949{9P.session} n=00000000b2f1fccb
	[  +0.001112] FS-Cache: N-key=[10] '34323936373734333137'
	
	
	==> etcd [085be7ace136b801ec32172b7f7a4a18032b49fc30e4fc31c9e16a3aeb8fcf70] <==
	2024-09-20 19:02:38.463396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:02:48.463039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:02:58.463234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:08.463436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:18.463109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:28.463341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:38.463037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:48.463049 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:03:58.463212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:08.463047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:18.463265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:28.463063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:38.463080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:48.463058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:04:58.463158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:08.463227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:18.463221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:28.463039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:38.463084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:48.463153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:05:58.463130 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:06:08.463227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:06:18.463358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:06:28.463920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:06:38.463269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ffeafa6ea90467e9aee01c15062a10d7e7d8deb2522a37b559544d399360d4c5] <==
	2024-09-20 18:57:58.319664 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/09/20 18:57:58 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/09/20 18:57:58 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/09/20 18:57:58 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/20 18:57:58 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/20 18:57:58 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-20 18:57:58.459115 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-20 18:57:58.462504 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-20 18:57:58.462713 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-20 18:57:58.462815 I | etcdserver: published {Name:old-k8s-version-809747 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-20 18:57:58.463075 I | embed: ready to serve client requests
	2024-09-20 18:57:58.464480 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-20 18:57:58.467163 I | embed: ready to serve client requests
	2024-09-20 18:57:58.474415 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-20 18:58:26.366339 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:58:35.058489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:58:45.059599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:58:55.058669 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:05.058396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:15.059362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:25.058986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:35.058458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:45.060014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:59:55.058412 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 19:00:05.058623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:06:41 up  2:49,  0 users,  load average: 1.47, 1.91, 2.45
	Linux old-k8s-version-809747 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6451e639fa627b6f7eb0c0d6d43b8de7a1c9297b75f90eba2d66578a806ba13c] <==
	I0920 18:58:28.919196       1 controller.go:338] Waiting for informer caches to sync
	I0920 18:58:28.919202       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0920 18:58:29.119771       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0920 18:58:29.119938       1 metrics.go:61] Registering metrics
	I0920 18:58:29.120078       1 controller.go:374] Syncing nftables rules
	I0920 18:58:38.923134       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:58:38.923219       1 main.go:299] handling current node
	I0920 18:58:48.919500       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:58:48.919660       1 main.go:299] handling current node
	I0920 18:58:58.923150       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:58:58.923187       1 main.go:299] handling current node
	I0920 18:59:08.923788       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:08.923821       1 main.go:299] handling current node
	I0920 18:59:18.919510       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:18.919544       1 main.go:299] handling current node
	I0920 18:59:28.919214       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:28.919287       1 main.go:299] handling current node
	I0920 18:59:38.920547       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:38.920610       1 main.go:299] handling current node
	I0920 18:59:48.919128       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:48.919161       1 main.go:299] handling current node
	I0920 18:59:58.919340       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 18:59:58.919374       1 main.go:299] handling current node
	I0920 19:00:08.920547       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:00:08.920595       1 main.go:299] handling current node
	
	
	==> kindnet [ad9741225d44ef3cd93a867e4aedc84a8ed23877c1b05cd8229cfb1260ced7de] <==
	I0920 19:04:35.326438       1 main.go:299] handling current node
	I0920 19:04:45.327348       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:04:45.327389       1 main.go:299] handling current node
	I0920 19:04:55.319419       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:04:55.319516       1 main.go:299] handling current node
	I0920 19:05:05.326486       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:05.326523       1 main.go:299] handling current node
	I0920 19:05:15.327774       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:15.327811       1 main.go:299] handling current node
	I0920 19:05:25.321140       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:25.321174       1 main.go:299] handling current node
	I0920 19:05:35.323845       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:35.323887       1 main.go:299] handling current node
	I0920 19:05:45.328019       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:45.328066       1 main.go:299] handling current node
	I0920 19:05:55.319819       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:05:55.319859       1 main.go:299] handling current node
	I0920 19:06:05.322564       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:06:05.322605       1 main.go:299] handling current node
	I0920 19:06:15.328439       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:06:15.328479       1 main.go:299] handling current node
	I0920 19:06:25.327952       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:06:25.327985       1 main.go:299] handling current node
	I0920 19:06:35.322385       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 19:06:35.322622       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d106efe91d320b3b45d7444b13bc8682a69b480fd3d2c46ef29ca1522cc0dba7] <==
	I0920 19:03:12.110812       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:03:12.110822       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 19:03:44.428924       1 client.go:360] parsed scheme: "passthrough"
	I0920 19:03:44.428985       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:03:44.428997       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 19:03:55.142660       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 19:03:55.142738       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 19:03:55.142750       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:04:19.661847       1 client.go:360] parsed scheme: "passthrough"
	I0920 19:04:19.661917       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:04:19.661927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 19:04:52.894511       1 client.go:360] parsed scheme: "passthrough"
	I0920 19:04:52.894559       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:04:52.894568       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 19:05:35.123321       1 client.go:360] parsed scheme: "passthrough"
	I0920 19:05:35.123366       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:05:35.123376       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 19:05:53.517025       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 19:05:53.517256       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 19:05:53.517338       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 19:06:12.377995       1 client.go:360] parsed scheme: "passthrough"
	I0920 19:06:12.378036       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 19:06:12.378043       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [fb7f2e4033f4a4873ebbbd7a1613734b2ca70f9f331577110431f7b00029efa4] <==
	I0920 18:58:07.575267       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 18:58:07.583589       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0920 18:58:07.588477       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0920 18:58:07.588505       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0920 18:58:08.108242       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:58:08.155074       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0920 18:58:08.270414       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0920 18:58:08.271917       1 controller.go:606] quota admission added evaluator for: endpoints
	I0920 18:58:08.281280       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:58:09.288209       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0920 18:58:09.856161       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0920 18:58:09.922591       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0920 18:58:18.336329       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:58:25.263178       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0920 18:58:25.343867       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0920 18:58:37.185875       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:58:37.185919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:58:37.185927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:59:10.862907       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:59:10.862967       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:59:10.862985       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:59:51.032066       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:59:51.032353       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:59:51.032400       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0920 19:00:12.020370       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [dc58bda86f6344646d9480f42703769ac45b07cf183c47ae95ac99cf89959d32] <==
	I0920 18:58:25.322989       1 event.go:291] "Event occurred" object="old-k8s-version-809747" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-809747 event: Registered Node old-k8s-version-809747 in Controller"
	I0920 18:58:25.305436       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0920 18:58:25.306489       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0920 18:58:25.323322       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0920 18:58:25.306508       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0920 18:58:25.306520       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0920 18:58:25.353340       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0920 18:58:25.379330       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-w5hf6"
	I0920 18:58:25.435871       1 shared_informer.go:247] Caches are synced for disruption 
	I0920 18:58:25.436981       1 disruption.go:339] Sending events to api server.
	I0920 18:58:25.443906       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tczmb"
	I0920 18:58:25.447207       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jz4sz"
	I0920 18:58:25.448088       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-682lc"
	I0920 18:58:25.459430       1 shared_informer.go:247] Caches are synced for resource quota 
	I0920 18:58:25.467368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0920 18:58:25.511897       1 shared_informer.go:247] Caches are synced for stateful set 
	I0920 18:58:25.631042       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0920 18:58:25.929062       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0920 18:58:25.929084       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 18:58:25.931458       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0920 18:58:27.051482       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0920 18:58:27.064304       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-w5hf6"
	I0920 19:00:11.712339       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0920 19:00:11.754378       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0920 19:00:11.795508       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	
	
	==> kube-controller-manager [fc97c68baab83ca4a8e34d1add491da97117d8436842e907b6c7ef5194a66548] <==
	W0920 19:02:16.390635       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:02:42.442256       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:02:48.041136       1 request.go:655] Throttling request took 1.048520946s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0920 19:02:48.892555       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:03:12.945151       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:03:20.542944       1 request.go:655] Throttling request took 1.048457213s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0920 19:03:21.394510       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:03:43.447456       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:03:53.045095       1 request.go:655] Throttling request took 1.0483931s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 19:03:53.896534       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:04:13.949354       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:04:25.547186       1 request.go:655] Throttling request took 1.044016943s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0920 19:04:26.398779       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:04:44.451279       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:04:58.049356       1 request.go:655] Throttling request took 1.045209184s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 19:04:58.900909       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:05:14.953054       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:05:30.551377       1 request.go:655] Throttling request took 1.048296387s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 19:05:31.402942       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:05:45.455223       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:06:03.053502       1 request.go:655] Throttling request took 1.047596345s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 19:06:03.905223       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 19:06:15.957171       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 19:06:35.554918       1 request.go:655] Throttling request took 1.047976435s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 19:06:36.406840       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
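
The repeated metrics.k8s.io/v1beta1 errors above are consistent with the v1beta1.metrics.k8s.io APIService having no healthy backend while the metrics-server pod is not running, which also matches the 503 the kube-apiserver reports for that APIService. A quick way to confirm this state against the same cluster would be (a sketch; only the context and object names shown elsewhere in this report are assumed):

	kubectl --context old-k8s-version-809747 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-809747 -n kube-system get deployment metrics-server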
	
	
	==> kube-proxy [899a291eb59c3b5f9d6b0939098b577edab6865605118eb9e26163895e023c2d] <==
	I0920 18:58:26.463955       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0920 18:58:26.464047       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0920 18:58:26.538422       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 18:58:26.538524       1 server_others.go:185] Using iptables Proxier.
	I0920 18:58:26.538750       1 server.go:650] Version: v1.20.0
	I0920 18:58:26.539238       1 config.go:315] Starting service config controller
	I0920 18:58:26.539251       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 18:58:26.541771       1 config.go:224] Starting endpoint slice config controller
	I0920 18:58:26.541784       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 18:58:26.639574       1 shared_informer.go:247] Caches are synced for service config 
	I0920 18:58:26.642813       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [e7bf621eab17a86a9873dcbf28cace6f79a68a72ffa3d75afc890ecc389a86ec] <==
	I0920 19:00:56.431360       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0920 19:00:56.431437       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0920 19:00:56.465636       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 19:00:56.465740       1 server_others.go:185] Using iptables Proxier.
	I0920 19:00:56.466087       1 server.go:650] Version: v1.20.0
	I0920 19:00:56.466680       1 config.go:224] Starting endpoint slice config controller
	I0920 19:00:56.466698       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 19:00:56.466851       1 config.go:315] Starting service config controller
	I0920 19:00:56.466884       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 19:00:56.566903       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0920 19:00:56.567101       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [2097a062acf828599a57c98a30c658af273f5825500a3bf75e77de707c622677] <==
	I0920 19:00:48.717942       1 serving.go:331] Generated self-signed cert in-memory
	W0920 19:00:52.119136       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 19:00:52.119452       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 19:00:52.119526       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 19:00:52.119553       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 19:00:52.375891       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0920 19:00:52.378959       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:00:52.378977       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 19:00:52.378994       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0920 19:00:52.522247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:00:52.522379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:00:52.522449       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:00:52.522516       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:00:52.522583       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:00:52.522648       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:00:52.522715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:00:52.522768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:00:52.522815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:00:52.522874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:00:52.522937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:00:52.537023       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0920 19:00:54.179171       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
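
The "forbidden" list/watch errors above occur only during scheduler start-up, before its RBAC permissions and the extension-apiserver-authentication ConfigMap become readable; the closing "Caches are synced" line shows the informers recovered on their own. If such warnings persisted, the ConfigMap named in the log could be checked directly (a sketch using the context name from this report):

	kubectl --context old-k8s-version-809747 -n kube-system get configmap extension-apiserver-authentication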
	
	
	==> kube-scheduler [7407be95357aafd1f43bd30ec343dcf522e2b4e01f1263ebcbc21b337b0c8043] <==
	W0920 18:58:06.791359       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:58:06.791365       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:58:06.853278       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	E0920 18:58:06.857116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:58:06.859822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:58:06.863002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:58:06.863122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0920 18:58:06.863258       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0920 18:58:06.863319       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:58:06.869065       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:58:06.866708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:58:06.866849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:58:06.866976       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:58:06.867098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:58:06.867207       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:58:06.867341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:58:06.871143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:58:06.871617       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:58:07.702192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:58:07.827774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:58:07.871471       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:58:07.879312       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:58:07.915507       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:58:08.014284       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0920 18:58:10.169290       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 20 19:05:13 old-k8s-version-809747 kubelet[664]: E0920 19:05:13.842057     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:05:14 old-k8s-version-809747 kubelet[664]: E0920 19:05:14.842283     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:05:24 old-k8s-version-809747 kubelet[664]: I0920 19:05:24.841742     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:05:24 old-k8s-version-809747 kubelet[664]: E0920 19:05:24.842121     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:05:28 old-k8s-version-809747 kubelet[664]: E0920 19:05:28.842363     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:05:37 old-k8s-version-809747 kubelet[664]: I0920 19:05:37.841544     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:05:37 old-k8s-version-809747 kubelet[664]: E0920 19:05:37.842390     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:05:43 old-k8s-version-809747 kubelet[664]: E0920 19:05:43.842490     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: I0920 19:05:50.841534     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:05:50 old-k8s-version-809747 kubelet[664]: E0920 19:05:50.841875     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:05:58 old-k8s-version-809747 kubelet[664]: E0920 19:05:58.842395     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: I0920 19:06:04.841549     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:06:04 old-k8s-version-809747 kubelet[664]: E0920 19:06:04.841931     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:06:09 old-k8s-version-809747 kubelet[664]: E0920 19:06:09.842365     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: I0920 19:06:17.841709     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:06:17 old-k8s-version-809747 kubelet[664]: E0920 19:06:17.842782     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:06:23 old-k8s-version-809747 kubelet[664]: E0920 19:06:23.842473     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 19:06:28 old-k8s-version-809747 kubelet[664]: I0920 19:06:28.841617     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:06:28 old-k8s-version-809747 kubelet[664]: E0920 19:06:28.842755     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
	Sep 20 19:06:34 old-k8s-version-809747 kubelet[664]: E0920 19:06:34.887923     664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 19:06:34 old-k8s-version-809747 kubelet[664]: E0920 19:06:34.887985     664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 19:06:34 old-k8s-version-809747 kubelet[664]: E0920 19:06:34.888127     664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-fk86c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 19:06:34 old-k8s-version-809747 kubelet[664]: E0920 19:06:34.888172     664 pod_workers.go:191] Error syncing pod c41b732f-de87-4ddf-ba27-2a549da6b22f ("metrics-server-9975d5f86-bx26v_kube-system(c41b732f-de87-4ddf-ba27-2a549da6b22f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 19:06:39 old-k8s-version-809747 kubelet[664]: I0920 19:06:39.841643     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c98e7f3d016526c516965888a4391a3ed700f52c82b52dbf151c403a9d21a1f
	Sep 20 19:06:39 old-k8s-version-809747 kubelet[664]: E0920 19:06:39.842231     664 pod_workers.go:191] Error syncing pod fdb51bf1-6934-4d9e-88fc-19e126a6cdf4 ("dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z27f2_kubernetes-dashboard(fdb51bf1-6934-4d9e-88fc-19e126a6cdf4)"
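
The metrics-server pull errors above will repeat indefinitely: the image reference fake.domain/registry.k8s.io/echoserver:1.4 points at a host that does not resolve ("lookup fake.domain ... no such host"), so the container never starts and the pod stays in ImagePullBackOff. The affected pod can be inspected with (a sketch; pod and context names are taken from this report):

	kubectl --context old-k8s-version-809747 -n kube-system get pod metrics-server-9975d5f86-bx26v -o wide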
	
	
	==> kubernetes-dashboard [a29e29bd825c67d29d71f1b611800e16571bd7e880bbb8077e23f09e4d6b05fd] <==
	2024/09/20 19:01:15 Using namespace: kubernetes-dashboard
	2024/09/20 19:01:15 Using in-cluster config to connect to apiserver
	2024/09/20 19:01:15 Using secret token for csrf signing
	2024/09/20 19:01:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/20 19:01:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/20 19:01:15 Successful initial request to the apiserver, version: v1.20.0
	2024/09/20 19:01:15 Generating JWE encryption key
	2024/09/20 19:01:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/20 19:01:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/20 19:01:16 Initializing JWE encryption key from synchronized object
	2024/09/20 19:01:16 Creating in-cluster Sidecar client
	2024/09/20 19:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:01:16 Serving insecurely on HTTP port: 9090
	2024/09/20 19:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:02:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:03:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:03:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:04:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:04:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:05:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:05:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:06:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 19:01:15 Starting overwatch
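
The repeated "Metric client health check failed" lines are consistent with the dashboard-metrics-scraper pod sitting in CrashLoopBackOff (see the kubelet log above); the dashboard's sidecar client simply retries every 30 seconds until that service responds. A quick status check would be (a sketch, assuming only the kubernetes-dashboard namespace shown in this log):

	kubectl --context old-k8s-version-809747 -n kubernetes-dashboard get pods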
	
	
	==> storage-provisioner [558af720d1a83df5088daec078428c689fe95a5e71a61e7c6b36ca73bcbd321f] <==
	I0920 19:00:54.775241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 19:01:24.777314       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [78dbadd374e4006947e4c0a4b40642cfe3d16108cbdaad7fa8a335c2eadfccc6] <==
	I0920 19:01:38.147440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:01:38.203047       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:01:38.203364       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:01:55.695986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:01:55.696652       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f49aae3a-f8f8-49ab-a59f-7ad69dbaa272", APIVersion:"v1", ResourceVersion:"861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-809747_269b0b69-c519-4c8c-9a14-93e6f0109a84 became leader
	I0920 19:01:55.697220       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-809747_269b0b69-c519-4c8c-9a14-93e6f0109a84!
	I0920 19:01:55.797845       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-809747_269b0b69-c519-4c8c-9a14-93e6f0109a84!
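
Of the two storage-provisioner instances, the first (558af720...) exited fatally after its version request to https://10.96.0.1:443 timed out, while the later instance (78dbadd3...) connected, acquired the k8s.io-minikube-hostpath lease, and started the provisioner controller. The service behind that address can be checked with (a sketch):

	kubectl --context old-k8s-version-809747 get service kubernetes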
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809747 -n old-k8s-version-809747
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-809747 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-bx26v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-809747 describe pod metrics-server-9975d5f86-bx26v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-809747 describe pod metrics-server-9975d5f86-bx26v: exit status 1 (119.78791ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-bx26v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-809747 describe pod metrics-server-9975d5f86-bx26v: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.31s)
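
The "NotFound" above is most likely a namespace mismatch rather than the pod disappearing: the non-running pod was found by an all-namespaces listing, but the follow-up describe omits -n, so kubectl looks in the default namespace. Repeating it against kube-system (a sketch using names already shown in this report) would be expected to locate the pod:

	kubectl --context old-k8s-version-809747 -n kube-system describe pod metrics-server-9975d5f86-bx26v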

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.02
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 216.44
31 TestAddons/serial/GCPAuth/Namespaces 0.21
33 TestAddons/parallel/Registry 16.22
34 TestAddons/parallel/Ingress 20.47
35 TestAddons/parallel/InspektorGadget 10.96
36 TestAddons/parallel/MetricsServer 6.92
38 TestAddons/parallel/CSI 48.55
39 TestAddons/parallel/Headlamp 17.02
40 TestAddons/parallel/CloudSpanner 6.62
41 TestAddons/parallel/LocalPath 53.18
42 TestAddons/parallel/NvidiaDevicePlugin 5.54
43 TestAddons/parallel/Yakd 11.83
44 TestAddons/StoppedEnableDisable 12.35
45 TestCertOptions 37.66
46 TestCertExpiration 226.72
48 TestForceSystemdFlag 36.57
49 TestForceSystemdEnv 43.03
50 TestDockerEnvContainerd 47.86
55 TestErrorSpam/setup 30.14
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 1.13
58 TestErrorSpam/pause 1.82
59 TestErrorSpam/unpause 1.91
60 TestErrorSpam/stop 1.48
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 48.17
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.37
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.02
72 TestFunctional/serial/CacheCmd/cache/add_local 1.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
77 TestFunctional/serial/CacheCmd/cache/delete 0.15
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
80 TestFunctional/serial/ExtraConfig 38.31
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.75
83 TestFunctional/serial/LogsFileCmd 1.72
84 TestFunctional/serial/InvalidService 4.4
86 TestFunctional/parallel/ConfigCmd 0.46
87 TestFunctional/parallel/DashboardCmd 9.72
88 TestFunctional/parallel/DryRun 0.62
89 TestFunctional/parallel/InternationalLanguage 0.3
90 TestFunctional/parallel/StatusCmd 1.43
94 TestFunctional/parallel/ServiceCmdConnect 7.7
95 TestFunctional/parallel/AddonsCmd 0.12
96 TestFunctional/parallel/PersistentVolumeClaim 23.78
98 TestFunctional/parallel/SSHCmd 0.56
99 TestFunctional/parallel/CpCmd 1.99
101 TestFunctional/parallel/FileSync 0.38
102 TestFunctional/parallel/CertSync 2.12
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
110 TestFunctional/parallel/License 0.22
111 TestFunctional/parallel/Version/short 0.07
112 TestFunctional/parallel/Version/components 1.55
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
118 TestFunctional/parallel/ImageCommands/Setup 0.74
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.45
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
135 TestFunctional/parallel/ServiceCmd/List 0.35
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
138 TestFunctional/parallel/ServiceCmd/Format 0.4
139 TestFunctional/parallel/ServiceCmd/URL 0.37
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
147 TestFunctional/parallel/ProfileCmd/profile_list 0.41
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
149 TestFunctional/parallel/MountCmd/any-port 7.95
150 TestFunctional/parallel/MountCmd/specific-port 1.7
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.81
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 118.45
159 TestMultiControlPlane/serial/DeployApp 32.58
160 TestMultiControlPlane/serial/PingHostFromPods 1.71
161 TestMultiControlPlane/serial/AddWorkerNode 25.35
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
164 TestMultiControlPlane/serial/CopyFile 19.4
165 TestMultiControlPlane/serial/StopSecondaryNode 13.02
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
167 TestMultiControlPlane/serial/RestartSecondaryNode 19.96
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.56
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 126.46
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.88
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
172 TestMultiControlPlane/serial/StopCluster 36.07
173 TestMultiControlPlane/serial/RestartCluster 77.52
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
175 TestMultiControlPlane/serial/AddSecondaryNode 41.83
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.6
180 TestJSONOutput/start/Command 85.95
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.78
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.86
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 44.32
206 TestKicCustomNetwork/use_default_bridge_network 34.64
207 TestKicExistingNetwork 32.16
208 TestKicCustomSubnet 33.46
209 TestKicStaticIP 33.71
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 67.85
214 TestMountStart/serial/StartWithMountFirst 6.55
215 TestMountStart/serial/VerifyMountFirst 0.6
216 TestMountStart/serial/StartWithMountSecond 7.26
217 TestMountStart/serial/VerifyMountSecond 0.27
218 TestMountStart/serial/DeleteFirst 1.64
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.22
221 TestMountStart/serial/RestartStopped 7.52
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 68.57
226 TestMultiNode/serial/DeployApp2Nodes 17.46
227 TestMultiNode/serial/PingHostFrom2Pods 0.97
228 TestMultiNode/serial/AddNode 17.03
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.71
231 TestMultiNode/serial/CopyFile 10.39
232 TestMultiNode/serial/StopNode 2.33
233 TestMultiNode/serial/StartAfterStop 9.66
234 TestMultiNode/serial/RestartKeepsNodes 105.69
235 TestMultiNode/serial/DeleteNode 5.59
236 TestMultiNode/serial/StopMultiNode 24.11
237 TestMultiNode/serial/RestartMultiNode 46.95
238 TestMultiNode/serial/ValidateNameConflict 34.32
243 TestPreload 127.41
245 TestScheduledStopUnix 107.26
248 TestInsufficientStorage 12.95
249 TestRunningBinaryUpgrade 86.25
251 TestKubernetesUpgrade 347.32
252 TestMissingContainerUpgrade 183.58
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 38.59
256 TestNoKubernetes/serial/StartWithStopK8s 19.17
257 TestNoKubernetes/serial/Start 8.91
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.98
260 TestNoKubernetes/serial/Stop 1.2
261 TestNoKubernetes/serial/StartNoArgs 7.13
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
263 TestStoppedBinaryUpgrade/Setup 0.7
264 TestStoppedBinaryUpgrade/Upgrade 110.65
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
274 TestPause/serial/Start 56.88
275 TestPause/serial/SecondStartNoReconfiguration 7.36
276 TestPause/serial/Pause 1.07
277 TestPause/serial/VerifyStatus 0.44
278 TestPause/serial/Unpause 0.92
279 TestPause/serial/PauseAgain 0.92
280 TestPause/serial/DeletePaused 2.91
281 TestPause/serial/VerifyDeletedResources 0.77
289 TestNetworkPlugins/group/false 5.35
294 TestStartStop/group/old-k8s-version/serial/FirstStart 161.18
296 TestStartStop/group/no-preload/serial/FirstStart 78.52
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.29
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
299 TestStartStop/group/old-k8s-version/serial/Stop 12.41
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
302 TestStartStop/group/no-preload/serial/DeployApp 9.47
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
304 TestStartStop/group/no-preload/serial/Stop 12.13
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 267.68
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/no-preload/serial/Pause 3.11
312 TestStartStop/group/embed-certs/serial/FirstStart 82.34
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.22
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
316 TestStartStop/group/old-k8s-version/serial/Pause 3.51
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.46
319 TestStartStop/group/embed-certs/serial/DeployApp 9.36
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
321 TestStartStop/group/embed-certs/serial/Stop 12.11
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 266.63
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.43
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.54
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/embed-certs/serial/Pause 3.2
334 TestStartStop/group/newest-cni/serial/FirstStart 35.79
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.21
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.02
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.5
341 TestStartStop/group/newest-cni/serial/Stop 1.35
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
343 TestStartStop/group/newest-cni/serial/SecondStart 22.52
344 TestNetworkPlugins/group/auto/Start 86.54
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
348 TestStartStop/group/newest-cni/serial/Pause 3.59
349 TestNetworkPlugins/group/kindnet/Start 86.38
350 TestNetworkPlugins/group/auto/KubeletFlags 0.29
351 TestNetworkPlugins/group/auto/NetCatPod 11.3
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.17
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
357 TestNetworkPlugins/group/kindnet/NetCatPod 9.46
358 TestNetworkPlugins/group/calico/Start 72.73
359 TestNetworkPlugins/group/kindnet/DNS 0.4
360 TestNetworkPlugins/group/kindnet/Localhost 0.26
361 TestNetworkPlugins/group/kindnet/HairPin 0.44
362 TestNetworkPlugins/group/custom-flannel/Start 56.34
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.42
365 TestNetworkPlugins/group/calico/NetCatPod 12.29
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
368 TestNetworkPlugins/group/calico/DNS 0.2
369 TestNetworkPlugins/group/calico/Localhost 0.19
370 TestNetworkPlugins/group/calico/HairPin 0.2
371 TestNetworkPlugins/group/custom-flannel/DNS 0.34
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
374 TestNetworkPlugins/group/enable-default-cni/Start 83.41
375 TestNetworkPlugins/group/flannel/Start 59.85
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
378 TestNetworkPlugins/group/flannel/NetCatPod 9.27
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
381 TestNetworkPlugins/group/flannel/DNS 0.26
382 TestNetworkPlugins/group/flannel/Localhost 0.2
383 TestNetworkPlugins/group/flannel/HairPin 0.17
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
387 TestNetworkPlugins/group/bridge/Start 76.27
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.29
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (6.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-123612 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-123612 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.366568585s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.37s)
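The json-events subtest only checks that the download-only start streams well-formed JSON progress events. A rough local equivalent, assuming jq is installed (the profile name download-only-demo and the jq pretty-printing are illustrative, not part of the test, which parses the events itself):

  out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo --force \
    --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker \
    | jq .    # each emitted line is a standalone JSON event; jq pretty-prints them as they arrive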

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 18:09:26.113262  446783 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0920 18:09:26.113358  446783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
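preload-exists only asserts that the tarball fetched by the previous step is already on disk. A minimal manual check, assuming a default MINIKUBE_HOME (the CI run above uses /home/jenkins/minikube-integration/19679-440039/.minikube instead):

  PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
  ls -lh "$PRELOAD"     # the subtest passes when this file is already present
  md5sum "$PRELOAD"     # optional: compare with the checksum minikube fetches alongside the tarball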

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-123612
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-123612: exit status 85 (81.189848ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-123612 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |          |
	|         | -p download-only-123612        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:09:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:09:19.789572  446789 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:09:19.789843  446789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:19.789866  446789 out.go:358] Setting ErrFile to fd 2...
	I0920 18:09:19.789871  446789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:19.790126  446789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	W0920 18:09:19.790275  446789 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19679-440039/.minikube/config/config.json: open /home/jenkins/minikube-integration/19679-440039/.minikube/config/config.json: no such file or directory
	I0920 18:09:19.790745  446789 out.go:352] Setting JSON to true
	I0920 18:09:19.791593  446789 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6711,"bootTime":1726849049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:09:19.791668  446789 start.go:139] virtualization:  
	I0920 18:09:19.794866  446789 out.go:97] [download-only-123612] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 18:09:19.795060  446789 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:09:19.795164  446789 notify.go:220] Checking for updates...
	I0920 18:09:19.797306  446789 out.go:169] MINIKUBE_LOCATION=19679
	I0920 18:09:19.800091  446789 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:09:19.802119  446789 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:09:19.804105  446789 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:09:19.805923  446789 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 18:09:19.809648  446789 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:09:19.809899  446789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:09:19.841193  446789 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:09:19.841315  446789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:19.892095  446789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:09:19.881977623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:19.892213  446789 docker.go:318] overlay module found
	I0920 18:09:19.894406  446789 out.go:97] Using the docker driver based on user configuration
	I0920 18:09:19.894439  446789 start.go:297] selected driver: docker
	I0920 18:09:19.894446  446789 start.go:901] validating driver "docker" against <nil>
	I0920 18:09:19.894560  446789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:19.947547  446789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:09:19.938268077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:19.947752  446789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:09:19.948037  446789 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 18:09:19.948192  446789 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:09:19.950523  446789 out.go:169] Using Docker driver with root privileges
	I0920 18:09:19.952280  446789 cni.go:84] Creating CNI manager for ""
	I0920 18:09:19.952343  446789 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:09:19.952358  446789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:09:19.952436  446789 start.go:340] cluster config:
	{Name:download-only-123612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-123612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:19.954401  446789 out.go:97] Starting "download-only-123612" primary control-plane node in "download-only-123612" cluster
	I0920 18:09:19.954420  446789 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 18:09:19.956340  446789 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:09:19.956363  446789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 18:09:19.956535  446789 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:09:19.971817  446789 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:09:19.972517  446789 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:09:19.972664  446789 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:09:20.159250  446789 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 18:09:20.159279  446789 cache.go:56] Caching tarball of preloaded images
	I0920 18:09:20.159453  446789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 18:09:20.161545  446789 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 18:09:20.161570  446789 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 18:09:20.249159  446789 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 18:09:24.353409  446789 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 18:09:24.353509  446789 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 18:09:24.525823  446789 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	
	
	* The control-plane node download-only-123612 host does not exist
	  To start a cluster, run: "minikube start -p download-only-123612"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
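The non-zero exit here is expected: a download-only profile never creates a host, so there are no logs to collect, and the subtest passes despite the recorded error. A sketch of reproducing the behaviour by hand (profile name taken from this run; it is deleted again later in the suite):

  out/minikube-linux-arm64 logs -p download-only-123612
  echo $?    # 85 on this run; the test notes the failure and still passes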

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-123612
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (6.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-342253 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-342253 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.017526285s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 18:09:32.552210  446783 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0920 18:09:32.552248  446783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-342253
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-342253: exit status 85 (80.134379ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-123612 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | -p download-only-123612        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| delete  | -p download-only-123612        | download-only-123612 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC | 20 Sep 24 18:09 UTC |
	| start   | -o=json --download-only        | download-only-342253 | jenkins | v1.34.0 | 20 Sep 24 18:09 UTC |                     |
	|         | -p download-only-342253        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:09:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:09:26.576726  446989 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:09:26.576870  446989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:26.576881  446989 out.go:358] Setting ErrFile to fd 2...
	I0920 18:09:26.576887  446989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:26.577141  446989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:09:26.577536  446989 out.go:352] Setting JSON to true
	I0920 18:09:26.578481  446989 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6718,"bootTime":1726849049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:09:26.578555  446989 start.go:139] virtualization:  
	I0920 18:09:26.580848  446989 out.go:97] [download-only-342253] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:09:26.581054  446989 notify.go:220] Checking for updates...
	I0920 18:09:26.582972  446989 out.go:169] MINIKUBE_LOCATION=19679
	I0920 18:09:26.584812  446989 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:09:26.586415  446989 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:09:26.588058  446989 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:09:26.589938  446989 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 18:09:26.593305  446989 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:09:26.593591  446989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:09:26.615473  446989 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:09:26.615601  446989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:26.669204  446989 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:09:26.659592229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:26.669316  446989 docker.go:318] overlay module found
	I0920 18:09:26.671191  446989 out.go:97] Using the docker driver based on user configuration
	I0920 18:09:26.671233  446989 start.go:297] selected driver: docker
	I0920 18:09:26.671240  446989 start.go:901] validating driver "docker" against <nil>
	I0920 18:09:26.671357  446989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:26.726653  446989 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:09:26.716541939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:09:26.726849  446989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:09:26.727127  446989 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 18:09:26.727287  446989 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:09:26.729347  446989 out.go:169] Using Docker driver with root privileges
	I0920 18:09:26.731456  446989 cni.go:84] Creating CNI manager for ""
	I0920 18:09:26.731512  446989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:09:26.731523  446989 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:09:26.731608  446989 start.go:340] cluster config:
	{Name:download-only-342253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-342253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:26.733619  446989 out.go:97] Starting "download-only-342253" primary control-plane node in "download-only-342253" cluster
	I0920 18:09:26.733649  446989 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 18:09:26.735720  446989 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:09:26.735755  446989 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:09:26.735859  446989 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:09:26.751949  446989 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:09:26.752079  446989 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:09:26.752105  446989 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:09:26.752110  446989 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:09:26.752122  446989 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:09:26.796984  446989 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 18:09:26.797012  446989 cache.go:56] Caching tarball of preloaded images
	I0920 18:09:26.797194  446989 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:09:26.799429  446989 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 18:09:26.799460  446989 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 18:09:26.883722  446989 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 18:09:31.007118  446989 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 18:09:31.007239  446989 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19679-440039/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-342253 host does not exist
	  To start a cluster, run: "minikube start -p download-only-342253"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-342253
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 18:09:33.796979  446783 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-913794 --alsologtostderr --binary-mirror http://127.0.0.1:43405 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-913794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-913794
--- PASS: TestBinaryMirror (0.61s)
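TestBinaryMirror points a download-only start at a loopback HTTP endpoint via --binary-mirror. A minimal sketch of the same idea; the mirror directory layout, the use of python3's built-in file server, and the profile name binary-mirror-demo are assumptions for illustration, not taken from the test:

  mkdir -p mirror
  python3 -m http.server 43405 --directory ./mirror &    # serve a local mirror on the loopback port used above
  out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --alsologtostderr \
    --binary-mirror http://127.0.0.1:43405 --driver=docker --container-runtime=containerd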

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-610387
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-610387: exit status 85 (73.106904ms)

                                                
                                                
-- stdout --
	* Profile "addons-610387" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610387"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-610387
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-610387: exit status 85 (80.480097ms)

                                                
                                                
-- stdout --
	* Profile "addons-610387" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610387"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (216.44s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-610387 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-610387 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m36.437825751s)
--- PASS: TestAddons/Setup (216.44s)
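The addons do not have to be stacked onto the start command; once the profile exists they can be toggled individually. A small sketch (metrics-server is just an example here; any of the addons listed above works the same way):

  out/minikube-linux-arm64 -p addons-610387 addons enable metrics-server
  out/minikube-linux-arm64 -p addons-610387 addons list    # shows which addons are currently enabled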

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-610387 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-610387 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.88926ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qjm6z" [1c16fedd-152a-4247-a39f-773f4b51b9ab] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005621203s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qmjm7" [a27eb90b-d7ec-4ce6-8bc9-84bbee5a6d13] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006992627s
addons_test.go:338: (dbg) Run:  kubectl --context addons-610387 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-610387 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-610387 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.034619405s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 ip
2024/09/20 18:17:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.22s)
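Both reachability checks in this test can be repeated by hand: the in-cluster probe is the same busybox one-liner the test runs, and the host-side probe mirrors the DEBUG GET against the node IP shown above (printing the HTTP status code is an illustrative addition):

  kubectl --context addons-610387 run registry-test --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -s -o /dev/null -w '%{http_code}\n' \
    "http://$(out/minikube-linux-arm64 -p addons-610387 ip):5000/"    # host-side check against the registry on port 5000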

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-610387 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-610387 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-610387 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4dc4ed82-d3b2-427c-9dd1-564725efa073] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4dc4ed82-d3b2-427c-9dd1-564725efa073] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004789821s
I0920 18:17:34.493808  446783 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-610387 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable ingress-dns --alsologtostderr -v=1: (1.565621691s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable ingress --alsologtostderr -v=1: (8.097810092s)
--- PASS: TestAddons/parallel/Ingress (20.47s)
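The two probes above are the useful ones to keep for manual debugging of the ingress and ingress-dns addons; both are exactly what the test runs after applying its nginx manifests, with the node IP filled in via command substitution:

  out/minikube-linux-arm64 -p addons-610387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-610387 ip)"    # resolves via the ingress-dns addon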

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.96s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2s8f2" [a67a23b6-7ab5-40a7-8f04-23d6ba2f8e37] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004588964s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-610387
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-610387: (5.957712169s)
--- PASS: TestAddons/parallel/InspektorGadget (10.96s)
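A quick manual equivalent of the readiness wait and the teardown used here (label selector, namespace, and commands taken from the test output above):

  kubectl --context addons-610387 get pods -n gadget -l k8s-app=gadget    # wait until the gadget pod is Running and Ready
  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-610387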

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.349269ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9wbd9" [3587d5bc-bb15-4e63-b7cd-762e145e267d] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004094889s
addons_test.go:413: (dbg) Run:  kubectl --context addons-610387 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.92s)
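Once the metrics-server pod is healthy, the same resource view the test samples is available directly; the node-level view is not part of the test and is shown only for completeness:

  kubectl --context addons-610387 top pods -n kube-system    # the command the test runs once metrics are being scraped
  kubectl --context addons-610387 top nodes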

                                                
                                    
x
+
TestAddons/parallel/CSI (48.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 18:17:06.124828  446783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:17:06.131537  446783 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:17:06.132030  446783 kapi.go:107] duration metric: took 9.920004ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 10.46941ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-610387 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-610387 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [62b85018-c783-41f5-bd84-203dec21bf7e] Pending
helpers_test.go:344: "task-pv-pod" [62b85018-c783-41f5-bd84-203dec21bf7e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [62b85018-c783-41f5-bd84-203dec21bf7e] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.006145065s
addons_test.go:528: (dbg) Run:  kubectl --context addons-610387 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-610387 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-610387 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-610387 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-610387 delete pod task-pv-pod: (1.227188973s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-610387 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-610387 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-610387 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9ad6ad0b-7f0e-42ac-8f9a-7b72f4c6d83d] Pending
helpers_test.go:344: "task-pv-pod-restore" [9ad6ad0b-7f0e-42ac-8f9a-7b72f4c6d83d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9ad6ad0b-7f0e-42ac-8f9a-7b72f4c6d83d] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004026416s
addons_test.go:570: (dbg) Run:  kubectl --context addons-610387 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-610387 delete pod task-pv-pod-restore: (1.381826511s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-610387 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-610387 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.908800727s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable volumesnapshots --alsologtostderr -v=1: (1.161772721s)
--- PASS: TestAddons/parallel/CSI (48.55s)
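Note: the snapshot-readiness poll above (helpers_test.go:419) can be reproduced by hand against this profile with a single kubectl query; this is a sketch using the names from this run, not part of the test output:

    kubectl --context addons-610387 get volumesnapshot new-snapshot-demo -n default \
      -o jsonpath='{.status.readyToUse}'

The test repeats this query, presumably until the field reports true, before deleting the source pod and restoring the PVC from the snapshot.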

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-610387 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-610387 --alsologtostderr -v=1: (1.003055694s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-2wvtg" [ba767c23-bd36-439c-b8f1-972f41a9250b] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-2wvtg" [ba767c23-bd36-439c-b8f1-972f41a9250b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-2wvtg" [ba767c23-bd36-439c-b8f1-972f41a9250b] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.008504903s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable headlamp --alsologtostderr -v=1: (7.01141913s)
--- PASS: TestAddons/parallel/Headlamp (17.02s)
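Note: the addon lifecycle exercised here maps onto two CLI calls, copied from the run above; other addon names can be substituted the same way:

    out/minikube-linux-arm64 addons enable headlamp -p addons-610387 --alsologtostderr -v=1
    out/minikube-linux-arm64 -p addons-610387 addons disable headlamp --alsologtostderr -v=1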

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-bpfcl" [e58099c5-f9c4-4b86-8b02-343b007347d3] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005319238s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-610387
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.18s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-610387 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-610387 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [25e46c4e-a798-40ff-ab41-7b1d46c45ae3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [25e46c4e-a798-40ff-ab41-7b1d46c45ae3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [25e46c4e-a798-40ff-ab41-7b1d46c45ae3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003957761s
addons_test.go:938: (dbg) Run:  kubectl --context addons-610387 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 ssh "cat /opt/local-path-provisioner/pvc-0af5477c-2f90-47ee-add2-eb9ac2863a88_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-610387 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-610387 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.803337354s)
--- PASS: TestAddons/parallel/LocalPath (53.18s)
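Note: the local-path check reads the provisioned file directly on the node over ssh; the command from this run is shown below (the pvc-... directory name is unique to each provisioning, so it changes between runs):

    out/minikube-linux-arm64 -p addons-610387 ssh \
      "cat /opt/local-path-provisioner/pvc-0af5477c-2f90-47ee-add2-eb9ac2863a88_default_test-pvc/file1"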

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4s278" [7727b064-5967-4908-ac7f-230413845569] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004394163s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-610387
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)
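Note: the "waiting 6m0s for pods matching" helper used throughout these addon tests is roughly equivalent to a kubectl wait; a hedged sketch for this daemonset, with the label and namespace taken from the log (this is an approximation, not the helper's actual implementation):

    kubectl --context addons-610387 -n kube-system wait pod \
      -l name=nvidia-device-plugin-ds --for=condition=Ready --timeout=6m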

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fbls6" [52fc7f35-a201-4092-ac18-13feb7e15876] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004355882s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-610387 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-610387 addons disable yakd --alsologtostderr -v=1: (5.823163297s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-610387
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-610387: (12.076505089s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-610387
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-610387
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-610387
--- PASS: TestAddons/StoppedEnableDisable (12.35s)
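Note: this test confirms that addons can still be enabled and disabled against a stopped profile; the sequence from the run is:

    out/minikube-linux-arm64 stop -p addons-610387
    out/minikube-linux-arm64 addons enable dashboard -p addons-610387
    out/minikube-linux-arm64 addons disable dashboard -p addons-610387
    out/minikube-linux-arm64 addons disable gvisor -p addons-610387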

                                                
                                    
x
+
TestCertOptions (37.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-257492 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-257492 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.95559168s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-257492 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-257492 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-257492 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-257492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-257492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-257492: (2.026866477s)
--- PASS: TestCertOptions (37.66s)
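Note: the SAN and port assertions rely on inspecting the generated apiserver certificate; to spot-check by hand, the openssl call from this run can be piped through grep (the grep is an illustration added here, not part of the test):

    out/minikube-linux-arm64 -p cert-options-257492 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"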

                                                
                                    
x
+
TestCertExpiration (226.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0920 18:56:14.029404  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.813094321s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.567091666s)
helpers_test.go:175: Cleaning up "cert-expiration-735719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-735719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-735719: (2.336187241s)
--- PASS: TestCertExpiration (226.72s)
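Note: the test starts the profile with a 3-minute certificate lifetime and later restarts it with a one-year lifetime to exercise renewal; the two start invocations from this run are:

    out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p cert-expiration-735719 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd

The stray cert_rotation.go error above refers to a client.crt from the earlier addons-610387 profile and does not change the result here (the test passes).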

                                                
                                    
x
+
TestForceSystemdFlag (36.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-226210 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-226210 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.517555923s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-226210 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-226210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-226210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-226210: (2.465084783s)
--- PASS: TestForceSystemdFlag (36.57s)
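Note: the check reads the generated containerd configuration over ssh; presumably the assertion is that the runc runtime has the systemd cgroup driver enabled. A manual spot-check (the grep and the SystemdCgroup key are assumptions for illustration):

    out/minikube-linux-arm64 -p force-systemd-flag-226210 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup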

                                                
                                    
x
+
TestForceSystemdEnv (43.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-042522 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-042522 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.060852608s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-042522 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-042522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-042522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-042522: (2.484953781s)
--- PASS: TestForceSystemdEnv (43.03s)
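Note: unlike the flag variant above, this test drives the same behaviour through the environment, presumably via MINIKUBE_FORCE_SYSTEMD (the variable appears in the start output elsewhere in this report). A hedged sketch:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-042522 --memory=2048 --driver=docker --container-runtime=containerd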

                                                
                                    
x
+
TestDockerEnvContainerd (47.86s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-866180 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-866180 --driver=docker  --container-runtime=containerd: (32.055985825s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-866180"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-866180": (1.038312529s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iGQkQE2XBbHM/agent.466240" SSH_AGENT_PID="466241" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iGQkQE2XBbHM/agent.466240" SSH_AGENT_PID="466241" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iGQkQE2XBbHM/agent.466240" SSH_AGENT_PID="466241" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.321531539s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iGQkQE2XBbHM/agent.466240" SSH_AGENT_PID="466241" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-866180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-866180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-866180: (2.069927472s)
--- PASS: TestDockerEnvContainerd (47.86s)
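Note: outside the test harness, the docker-env output is normally consumed with eval rather than by pasting SSH_AUTH_SOCK/DOCKER_HOST values by hand; a sketch using this run's profile and image tag:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-866180)"
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls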

                                                
                                    
x
+
TestErrorSpam/setup (30.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-575716 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-575716 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-575716 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-575716 --driver=docker  --container-runtime=containerd: (30.139469614s)
--- PASS: TestErrorSpam/setup (30.14s)

                                                
                                    
x
+
TestErrorSpam/start (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
x
+
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
x
+
TestErrorSpam/pause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 pause
--- PASS: TestErrorSpam/pause (1.82s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 stop: (1.28748314s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-575716 --log_dir /tmp/nospam-575716 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19679-440039/.minikube/files/etc/test/nested/copy/446783/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (48.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-252518 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.164400741s)
--- PASS: TestFunctional/serial/StartWithProxy (48.17s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.37s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 18:21:15.818770  446783 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-252518 --alsologtostderr -v=8: (6.363493792s)
functional_test.go:663: soft start took 6.371222628s for "functional-252518" cluster.
I0920 18:21:22.184403  446783 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.37s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-252518 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:3.1: (1.447009257s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:3.3: (1.334145035s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 cache add registry.k8s.io/pause:latest: (1.238097317s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.02s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-252518 /tmp/TestFunctionalserialCacheCmdcacheadd_local2124857003/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache add minikube-local-cache-test:functional-252518
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache delete minikube-local-cache-test:functional-252518
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-252518
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.528502ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 cache reload: (1.125768345s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
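Note: the reload flow above removes a cached image from the node and restores it from minikube's local cache; the three commands, copied from the run:

    out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-252518 cache reload
    out/minikube-linux-arm64 -p functional-252518 ssh sudo crictl inspecti registry.k8s.io/pause:latest

The intermediate inspecti failure (exit status 1, "no such image") is the expected state between rmi and reload.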

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 kubectl -- --context functional-252518 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-252518 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-252518 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.30472227s)
functional_test.go:761: restart took 38.304875207s for "functional-252518" cluster.
I0920 18:22:08.912113  446783 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.31s)
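Note: --extra-config takes component.key=value pairs and persists into the profile (the value used here reappears as ExtraOptions in the DryRun config dumps later in this report); the restart used by this test:

    out/minikube-linux-arm64 start -p functional-252518 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all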

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-252518 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 logs: (1.745796211s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 logs --file /tmp/TestFunctionalserialLogsFileCmd734341169/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 logs --file /tmp/TestFunctionalserialLogsFileCmd734341169/001/logs.txt: (1.720771104s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-252518 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-252518
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-252518: exit status 115 (431.983413ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31283 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-252518 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)
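Note: exit status 115 is the expected outcome here, since minikube service cannot return a URL for a service with no running backing pod (SVC_UNREACHABLE in the stderr above); to reproduce the check by hand:

    kubectl --context functional-252518 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-252518; echo "exit: $?"   # expected 115
    kubectl --context functional-252518 delete -f testdata/invalidsvc.yaml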

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 config get cpus: exit status 14 (70.65469ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 config get cpus: exit status 14 (63.489093ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-252518 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-252518 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 483301: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.72s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-252518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (257.175812ms)

                                                
                                                
-- stdout --
	* [functional-252518] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:22:58.417296  482861 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:22:58.417567  482861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:22:58.417599  482861 out.go:358] Setting ErrFile to fd 2...
	I0920 18:22:58.417622  482861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:22:58.417890  482861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:22:58.418333  482861 out.go:352] Setting JSON to false
	I0920 18:22:58.419387  482861 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7530,"bootTime":1726849049,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:22:58.419488  482861 start.go:139] virtualization:  
	I0920 18:22:58.422276  482861 out.go:177] * [functional-252518] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:22:58.425977  482861 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:22:58.426042  482861 notify.go:220] Checking for updates...
	I0920 18:22:58.435150  482861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:22:58.437588  482861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:22:58.440266  482861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:22:58.442756  482861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:22:58.444750  482861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:22:58.447185  482861 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:22:58.447694  482861 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:22:58.494475  482861 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:22:58.494605  482861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:22:58.576945  482861 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:22:58.566877344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:22:58.577098  482861 docker.go:318] overlay module found
	I0920 18:22:58.579436  482861 out.go:177] * Using the docker driver based on existing profile
	I0920 18:22:58.581469  482861 start.go:297] selected driver: docker
	I0920 18:22:58.581517  482861 start.go:901] validating driver "docker" against &{Name:functional-252518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-252518 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:22:58.581640  482861 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:22:58.584199  482861 out.go:201] 
	W0920 18:22:58.586273  482861 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:22:58.588112  482861 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.62s)
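Note: the non-zero exit (status 23) is the memory validation firing, since the requested 250MB is below minikube's 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY in the stderr above); the failing invocation, copied from the run:

    out/minikube-linux-arm64 start -p functional-252518 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd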

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-252518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (300.52113ms)

                                                
                                                
-- stdout --
	* [functional-252518] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:22:58.450137  482866 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:22:58.450460  482866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:22:58.450492  482866 out.go:358] Setting ErrFile to fd 2...
	I0920 18:22:58.450512  482866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:22:58.451574  482866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:22:58.452019  482866 out.go:352] Setting JSON to false
	I0920 18:22:58.453077  482866 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7530,"bootTime":1726849049,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:22:58.453192  482866 start.go:139] virtualization:  
	I0920 18:22:58.455814  482866 out.go:177] * [functional-252518] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 18:22:58.458378  482866 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:22:58.459247  482866 notify.go:220] Checking for updates...
	I0920 18:22:58.462356  482866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:22:58.464231  482866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:22:58.466047  482866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:22:58.467853  482866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:22:58.469627  482866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:22:58.475257  482866 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:22:58.479112  482866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:22:58.539485  482866 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:22:58.539623  482866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:22:58.653545  482866 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:22:58.640618582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:22:58.653658  482866 docker.go:318] overlay module found
	I0920 18:22:58.655884  482866 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 18:22:58.659202  482866 start.go:297] selected driver: docker
	I0920 18:22:58.659229  482866 start.go:901] validating driver "docker" against &{Name:functional-252518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-252518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:22:58.659325  482866 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:22:58.661906  482866 out.go:201] 
	W0920 18:22:58.664069  482866 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:22:58.665761  482866 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.43s)
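A minimal sketch of the status checks exercised above, run by hand against the same profile (functional-252518) with the binary built for this run; the Go-template field names are taken verbatim from the command the test invoked:

# Plain status summary for the profile
out/minikube-linux-arm64 -p functional-252518 status
# Status rendered through a custom Go template (labels as used by the test, including its "kublet" spelling)
out/minikube-linux-arm64 -p functional-252518 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# Machine-readable status
out/minikube-linux-arm64 -p functional-252518 status -o json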

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-252518 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-252518 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-6mn28" [30f045f1-ef9c-45e1-ba3e-70b4eac60885] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-6mn28" [30f045f1-ef9c-45e1-ba3e-70b4eac60885] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003667505s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30445
functional_test.go:1675: http://192.168.49.2:30445: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-6mn28

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30445
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.70s)
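The same connectivity check can be reproduced by hand roughly as follows, a sketch based on the commands above; the kubectl wait and curl steps are illustrative substitutes for the test's own polling and HTTP client:

kubectl --context functional-252518 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-252518 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-252518 wait --for=condition=ready pod -l app=hello-node-connect --timeout=10m
# Ask minikube for the NodePort URL, then fetch the echoserver response
URL=$(out/minikube-linux-arm64 -p functional-252518 service hello-node-connect --url)
curl -s "$URL"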

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (23.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6715be1a-834e-4b49-b3f0-f062911a8945] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003766206s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-252518 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-252518 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-252518 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-252518 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2b24a968-c96d-47ca-90fc-b20296ccbc06] Pending
helpers_test.go:344: "sp-pod" [2b24a968-c96d-47ca-90fc-b20296ccbc06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2b24a968-c96d-47ca-90fc-b20296ccbc06] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.005691962s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-252518 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-252518 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-252518 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bc17f98e-1ee0-43fe-947f-e50b9633f830] Pending
helpers_test.go:344: "sp-pod" [bc17f98e-1ee0-43fe-947f-e50b9633f830] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004775643s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-252518 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.78s)
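The claim manifest lives in the test's testdata directory and is not reproduced in the log; below is a sketch of an equivalent claim plus the follow-up check. The manifest body is an illustrative assumption (only the claim name "myclaim" comes from the log), not the verbatim testdata/storage-provisioner/pvc.yaml:

# Illustrative stand-in for testdata/storage-provisioner/pvc.yaml
cat <<'EOF' | kubectl --context functional-252518 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
# Confirm the claim was created and bound by the default storage class
kubectl --context functional-252518 get pvc myclaim -o json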

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh -n functional-252518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cp functional-252518:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd321390653/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh -n functional-252518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh -n functional-252518 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)
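A sketch of the same copy round trip by hand, using the cp and ssh subcommands shown above; the host-side destination path is simplified from the generated temp directory the test used:

# Copy a local file into the node's filesystem
out/minikube-linux-arm64 -p functional-252518 cp testdata/cp-test.txt /home/docker/cp-test.txt
# Read it back over SSH to confirm the contents arrived intact
out/minikube-linux-arm64 -p functional-252518 ssh -n functional-252518 "sudo cat /home/docker/cp-test.txt"
# Copy from the node back to the host
out/minikube-linux-arm64 -p functional-252518 cp functional-252518:/home/docker/cp-test.txt /tmp/cp-test.txt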

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/446783/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /etc/test/nested/copy/446783/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/446783.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /etc/ssl/certs/446783.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/446783.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /usr/share/ca-certificates/446783.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4467832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /etc/ssl/certs/4467832.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4467832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /usr/share/ca-certificates/4467832.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-252518 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "sudo systemctl is-active docker": exit status 1 (384.641386ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "sudo systemctl is-active crio": exit status 1 (376.026542ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 version -o=json --components: (1.552581771s)
--- PASS: TestFunctional/parallel/Version/components (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252518 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-252518
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-252518
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252518 image ls --format short --alsologtostderr:
I0920 18:23:01.369086  483467 out.go:345] Setting OutFile to fd 1 ...
I0920 18:23:01.370364  483467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:01.370404  483467 out.go:358] Setting ErrFile to fd 2...
I0920 18:23:01.370430  483467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:01.370715  483467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
I0920 18:23:01.371429  483467 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:01.371595  483467 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:01.372107  483467 cli_runner.go:164] Run: docker container inspect functional-252518 --format={{.State.Status}}
I0920 18:23:01.392826  483467 ssh_runner.go:195] Run: systemctl --version
I0920 18:23:01.392884  483467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252518
I0920 18:23:01.413114  483467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/functional-252518/id_rsa Username:docker}
I0920 18:23:01.519139  483467 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252518 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| docker.io/library/minikube-local-cache-test | functional-252518  | sha256:4ff5df | 991B   |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| localhost/my-image                          | functional-252518  | sha256:c0d3f1 | 831kB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-252518  | sha256:ce2d2c | 2.17MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252518 image ls --format table --alsologtostderr:
I0920 18:23:06.154050  483835 out.go:345] Setting OutFile to fd 1 ...
I0920 18:23:06.154199  483835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:06.154206  483835 out.go:358] Setting ErrFile to fd 2...
I0920 18:23:06.154212  483835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:06.154485  483835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
I0920 18:23:06.155133  483835 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:06.155247  483835 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:06.155705  483835 cli_runner.go:164] Run: docker container inspect functional-252518 --format={{.State.Status}}
I0920 18:23:06.175648  483835 ssh_runner.go:195] Run: systemctl --version
I0920 18:23:06.175746  483835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252518
I0920 18:23:06.195301  483835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/functional-252518/id_rsa Username:docker}
I0920 18:23:06.295506  483835 ssh_runner.go:195] Run: sudo crictl images --output json
2024/09/20 18:23:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252518 image ls --format json --alsologtostderr:
[{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:c0d3f134c41779bd597043e220d7282844669e7ba402d9aa02958d46825fa9f1","repoDigests":[],"repoTags":["localhost/my-image:functional-252518"],"size":"830615"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9
c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8
s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-252518"],"size":"2173567"},{"id":"sha256:4ff5df0fa257bf5a7aa8c330e76f20517edff20d721cd25fbaa0dde39955f7f0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-252518"],"size":"991"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4"
,"repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:279f381cb373
65bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252518 image ls --format json --alsologtostderr:
I0920 18:23:05.872917  483802 out.go:345] Setting OutFile to fd 1 ...
I0920 18:23:05.873021  483802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:05.873026  483802 out.go:358] Setting ErrFile to fd 2...
I0920 18:23:05.873031  483802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:05.873288  483802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
I0920 18:23:05.873923  483802 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:05.874034  483802 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:05.874696  483802 cli_runner.go:164] Run: docker container inspect functional-252518 --format={{.State.Status}}
I0920 18:23:05.908566  483802 ssh_runner.go:195] Run: systemctl --version
I0920 18:23:05.908624  483802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252518
I0920 18:23:05.934407  483802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/functional-252518/id_rsa Username:docker}
I0920 18:23:06.035203  483802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252518 image ls --format yaml --alsologtostderr:
- id: sha256:4ff5df0fa257bf5a7aa8c330e76f20517edff20d721cd25fbaa0dde39955f7f0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-252518
size: "991"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-252518
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252518 image ls --format yaml --alsologtostderr:
I0920 18:23:01.634161  483528 out.go:345] Setting OutFile to fd 1 ...
I0920 18:23:01.634380  483528 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:01.634432  483528 out.go:358] Setting ErrFile to fd 2...
I0920 18:23:01.634454  483528 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:01.634780  483528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
I0920 18:23:01.635474  483528 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:01.635643  483528 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:01.636204  483528 cli_runner.go:164] Run: docker container inspect functional-252518 --format={{.State.Status}}
I0920 18:23:01.655372  483528 ssh_runner.go:195] Run: systemctl --version
I0920 18:23:01.655426  483528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252518
I0920 18:23:01.676107  483528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/functional-252518/id_rsa Username:docker}
I0920 18:23:01.777537  483528 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh pgrep buildkitd: exit status 1 (329.182879ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image build -t localhost/my-image:functional-252518 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 image build -t localhost/my-image:functional-252518 testdata/build --alsologtostderr: (3.353386805s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252518 image build -t localhost/my-image:functional-252518 testdata/build --alsologtostderr:
I0920 18:23:02.210607  483621 out.go:345] Setting OutFile to fd 1 ...
I0920 18:23:02.211406  483621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:02.211449  483621 out.go:358] Setting ErrFile to fd 2...
I0920 18:23:02.211469  483621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:23:02.211872  483621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
I0920 18:23:02.212719  483621 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:02.213375  483621 config.go:182] Loaded profile config "functional-252518": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 18:23:02.213946  483621 cli_runner.go:164] Run: docker container inspect functional-252518 --format={{.State.Status}}
I0920 18:23:02.232079  483621 ssh_runner.go:195] Run: systemctl --version
I0920 18:23:02.232138  483621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252518
I0920 18:23:02.250905  483621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/functional-252518/id_rsa Username:docker}
I0920 18:23:02.355125  483621 build_images.go:161] Building image from path: /tmp/build.1472521018.tar
I0920 18:23:02.355196  483621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:23:02.365650  483621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1472521018.tar
I0920 18:23:02.369438  483621 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1472521018.tar: stat -c "%s %y" /var/lib/minikube/build/build.1472521018.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1472521018.tar': No such file or directory
I0920 18:23:02.369483  483621 ssh_runner.go:362] scp /tmp/build.1472521018.tar --> /var/lib/minikube/build/build.1472521018.tar (3072 bytes)
I0920 18:23:02.396090  483621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1472521018
I0920 18:23:02.405091  483621 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1472521018 -xf /var/lib/minikube/build/build.1472521018.tar
I0920 18:23:02.417124  483621 containerd.go:394] Building image: /var/lib/minikube/build/build.1472521018
I0920 18:23:02.417216  483621 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1472521018 --local dockerfile=/var/lib/minikube/build/build.1472521018 --output type=image,name=localhost/my-image:functional-252518
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:80f8fed14744121ac44c79a2815ffc492b985e70fd74cee5e9d3ce721d23e3c4
#8 exporting manifest sha256:80f8fed14744121ac44c79a2815ffc492b985e70fd74cee5e9d3ce721d23e3c4 0.0s done
#8 exporting config sha256:c0d3f134c41779bd597043e220d7282844669e7ba402d9aa02958d46825fa9f1 0.0s done
#8 naming to localhost/my-image:functional-252518 done
#8 DONE 0.2s
I0920 18:23:05.487653  483621 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1472521018 --local dockerfile=/var/lib/minikube/build/build.1472521018 --output type=image,name=localhost/my-image:functional-252518: (3.070403344s)
I0920 18:23:05.487730  483621 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1472521018
I0920 18:23:05.499067  483621 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1472521018.tar
I0920 18:23:05.509278  483621 build_images.go:217] Built localhost/my-image:functional-252518 from /tmp/build.1472521018.tar
I0920 18:23:05.509307  483621 build_images.go:133] succeeded building to: functional-252518
I0920 18:23:05.509312  483621 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
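The build context is the test's testdata/build directory; judging from the buildkit steps #5–#7 above it contains a short Dockerfile plus a content.txt file. Below is a sketch of an equivalent build in a scratch directory; the Dockerfile and the content.txt text are reconstructed assumptions based on those log steps, not the verbatim testdata files:

mkdir -p /tmp/build-demo
printf 'hello from the build test\n' > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# Build inside the cluster's containerd via buildkit, then confirm the image is listed
out/minikube-linux-arm64 -p functional-252518 image build -t localhost/my-image:functional-252518 /tmp/build-demo --alsologtostderr
out/minikube-linux-arm64 -p functional-252518 image ls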

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-252518
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518 --alsologtostderr: (1.224962705s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518 --alsologtostderr: (1.196261366s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-252518 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-252518 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-v4vq6" [b299e069-79fa-49c2-95a0-1b9e1eec99da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-v4vq6" [b299e069-79fa-49c2-95a0-1b9e1eec99da] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004243004s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)
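Note: the hello-node deployment exercised by the later ServiceCmd tests is plain kubectl against the same context; a sketch:
    kubectl --context functional-252518 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-252518 expose deployment hello-node --type=NodePort --port=8080
    # wait until the pod reports Running
    kubectl --context functional-252518 get pods -l app=hello-node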

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-252518
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)
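Note: ImageTagAndLoadDaemon is the same load path after tagging a freshly pulled image; the commands from this run were:
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-252518
    out/minikube-linux-arm64 -p functional-252518 image load --daemon kicbase/echo-server:functional-252518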

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image save kicbase/echo-server:functional-252518 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image rm kicbase/echo-server:functional-252518 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)
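Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together round-trip an image through a tarball; a sketch using a hypothetical /tmp path in place of the Jenkins workspace path:
    out/minikube-linux-arm64 -p functional-252518 image save kicbase/echo-server:functional-252518 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-252518 image rm kicbase/echo-server:functional-252518
    out/minikube-linux-arm64 -p functional-252518 image load /tmp/echo-server-save.tar
    # the tag should be listed again
    out/minikube-linux-arm64 -p functional-252518 image ls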

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-252518
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 image save --daemon kicbase/echo-server:functional-252518 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-252518
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
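Note: the reverse direction, from the cluster back into the local docker daemon, is a single command; a sketch with the same tag:
    docker rmi kicbase/echo-server:functional-252518
    out/minikube-linux-arm64 -p functional-252518 image save --daemon kicbase/echo-server:functional-252518
    # the image is present in the daemon again
    docker image inspect kicbase/echo-server:functional-252518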

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 479678: os: process already finished
helpers_test.go:502: unable to terminate pid 479557: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-252518 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [46121445-1b61-47c0-9cea-667c6b61a7b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [46121445-1b61-47c0-9cea-667c6b61a7b6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003817858s
I0920 18:22:36.215980  446783 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service list -o json
functional_test.go:1494: Took "342.400805ms" to run "out/minikube-linux-arm64 -p functional-252518 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31989
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31989
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
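Note: the ServiceCmd subtests only vary the output mode of `minikube service`; the invocations against the hello-node service were:
    out/minikube-linux-arm64 -p functional-252518 service list
    out/minikube-linux-arm64 -p functional-252518 service list -o json
    out/minikube-linux-arm64 -p functional-252518 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-252518 service hello-node --url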

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-252518 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.15.11 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
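Note: the tunnel checks amount to keeping `minikube tunnel` running, waiting for the LoadBalancer service to receive an ingress IP, then hitting it directly; a sketch (the curl is added here only for illustration, the test uses its own HTTP client):
    out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr &
    kubectl --context functional-252518 apply -f testdata/testsvc.yaml
    kubectl --context functional-252518 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # substitute the IP printed by the previous command
    curl http://10.102.15.11/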

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-252518 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "358.087411ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.994188ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "369.387181ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "52.48211ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
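Note: the ProfileCmd subtests compare the full and light listing modes and their timings; the commands were:
    out/minikube-linux-arm64 profile list
    # -l / --light is the faster listing mode (skips per-cluster status checks)
    out/minikube-linux-arm64 profile list -l
    out/minikube-linux-arm64 profile list -o json --light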

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdany-port4213165601/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726856566921905728" to /tmp/TestFunctionalparallelMountCmdany-port4213165601/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726856566921905728" to /tmp/TestFunctionalparallelMountCmdany-port4213165601/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726856566921905728" to /tmp/TestFunctionalparallelMountCmdany-port4213165601/001/test-1726856566921905728
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.020525ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:22:47.281285  446783 retry.go:31] will retry after 540.574404ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:22 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:22 test-1726856566921905728
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh cat /mount-9p/test-1726856566921905728
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-252518 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fe803897-1cb7-4909-b107-e70a241b6aec] Pending
helpers_test.go:344: "busybox-mount" [fe803897-1cb7-4909-b107-e70a241b6aec] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fe803897-1cb7-4909-b107-e70a241b6aec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fe803897-1cb7-4909-b107-e70a241b6aec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004008822s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-252518 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdany-port4213165601/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.95s)
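Note: the any-port mount test boils down to running `minikube mount` in the background, verifying the 9p mount from inside the node, and then consuming it from a pod; a sketch with a hypothetical host directory:
    out/minikube-linux-arm64 mount -p functional-252518 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-252518 ssh -- ls -la /mount-9p
    kubectl --context functional-252518 replace --force -f testdata/busybox-mount-test.yaml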

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdspecific-port4172129104/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.807929ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:22:55.232935  446783 retry.go:31] will retry after 285.9863ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdspecific-port4172129104/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "sudo umount -f /mount-9p": exit status 1 (277.631108ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-252518 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdspecific-port4172129104/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T" /mount1: exit status 1 (511.02309ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 18:22:57.079635  446783 retry.go:31] will retry after 341.720521ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252518 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-252518 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup828918464/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)
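Note: the remaining mount tests only vary the server port and the cleanup path; a sketch (the host directory is again hypothetical):
    # pin the 9p server to a fixed port
    out/minikube-linux-arm64 mount -p functional-252518 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &
    # tear down every mount process belonging to the profile
    out/minikube-linux-arm64 mount -p functional-252518 --kill=true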

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-252518
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-252518
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-252518
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (118.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426061 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 18:23:11.286419  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:11.608531  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:12.250087  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:13.531454  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:16.092907  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:21.214972  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:31.456989  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:23:51.938760  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:24:32.900440  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-426061 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m57.583902697s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (118.45s)
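Note: the HA suite starts from a multi-control-plane cluster created with --ha; the start and status invocations from this run were:
    out/minikube-linux-arm64 start -p ha-426061 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr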

                                                
                                    
TestMultiControlPlane/serial/DeployApp (32.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-426061 -- rollout status deployment/busybox: (29.524868676s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-r56jm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-w6654 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-wl9rp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-r56jm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-w6654 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-wl9rp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-r56jm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-w6654 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-wl9rp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.58s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-r56jm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-r56jm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-w6654 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-w6654 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-wl9rp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426061 -- exec busybox-7dff88458-wl9rp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)
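Note: DeployApp and PingHostFromPods verify pod DNS and host reachability through the bundled kubectl; a condensed sketch for a single pod (pod names differ per run):
    out/minikube-linux-arm64 kubectl -p ha-426061 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p ha-426061 -- rollout status deployment/busybox
    # <busybox-pod> is a placeholder for one of the pod names listed above
    out/minikube-linux-arm64 kubectl -p ha-426061 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local
    out/minikube-linux-arm64 kubectl -p ha-426061 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"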

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.35s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-426061 -v=7 --alsologtostderr
E0920 18:25:54.821729  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-426061 -v=7 --alsologtostderr: (24.293114646s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr: (1.055695004s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.35s)
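Note: a worker is added to the running HA cluster with the node subcommand, then status is re-checked:
    out/minikube-linux-arm64 node add -p ha-426061 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr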

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-426061 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.111378258s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 status --output json -v=7 --alsologtostderr: (1.039988246s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp testdata/cp-test.txt ha-426061:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3392371427/001/cp-test_ha-426061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061:/home/docker/cp-test.txt ha-426061-m02:/home/docker/cp-test_ha-426061_ha-426061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test_ha-426061_ha-426061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061:/home/docker/cp-test.txt ha-426061-m03:/home/docker/cp-test_ha-426061_ha-426061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test_ha-426061_ha-426061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061:/home/docker/cp-test.txt ha-426061-m04:/home/docker/cp-test_ha-426061_ha-426061-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test_ha-426061_ha-426061-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp testdata/cp-test.txt ha-426061-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3392371427/001/cp-test_ha-426061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m02:/home/docker/cp-test.txt ha-426061:/home/docker/cp-test_ha-426061-m02_ha-426061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test_ha-426061-m02_ha-426061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m02:/home/docker/cp-test.txt ha-426061-m03:/home/docker/cp-test_ha-426061-m02_ha-426061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test_ha-426061-m02_ha-426061-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m02:/home/docker/cp-test.txt ha-426061-m04:/home/docker/cp-test_ha-426061-m02_ha-426061-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test_ha-426061-m02_ha-426061-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp testdata/cp-test.txt ha-426061-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3392371427/001/cp-test_ha-426061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m03:/home/docker/cp-test.txt ha-426061:/home/docker/cp-test_ha-426061-m03_ha-426061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test_ha-426061-m03_ha-426061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m03:/home/docker/cp-test.txt ha-426061-m02:/home/docker/cp-test_ha-426061-m03_ha-426061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test_ha-426061-m03_ha-426061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m03:/home/docker/cp-test.txt ha-426061-m04:/home/docker/cp-test_ha-426061-m03_ha-426061-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test_ha-426061-m03_ha-426061-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp testdata/cp-test.txt ha-426061-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3392371427/001/cp-test_ha-426061-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m04:/home/docker/cp-test.txt ha-426061:/home/docker/cp-test_ha-426061-m04_ha-426061.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061 "sudo cat /home/docker/cp-test_ha-426061-m04_ha-426061.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m04:/home/docker/cp-test.txt ha-426061-m02:/home/docker/cp-test_ha-426061-m04_ha-426061-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test_ha-426061-m04_ha-426061-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 cp ha-426061-m04:/home/docker/cp-test.txt ha-426061-m03:/home/docker/cp-test_ha-426061-m04_ha-426061-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m03 "sudo cat /home/docker/cp-test_ha-426061-m04_ha-426061-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.40s)
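Note: every CopyFile step pairs `minikube cp` with an `ssh cat` readback on the target node; one representative pair from the run:
    out/minikube-linux-arm64 -p ha-426061 cp testdata/cp-test.txt ha-426061-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-426061 ssh -n ha-426061-m02 "sudo cat /home/docker/cp-test.txt"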

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 node stop m02 -v=7 --alsologtostderr: (12.157393097s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr: exit status 7 (859.487149ms)

                                                
                                                
-- stdout --
	ha-426061
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426061-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426061-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426061-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:26:42.123735  499880 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:26:42.123971  499880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:42.123984  499880 out.go:358] Setting ErrFile to fd 2...
	I0920 18:26:42.123989  499880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:42.124336  499880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:26:42.124708  499880 out.go:352] Setting JSON to false
	I0920 18:26:42.124778  499880 mustload.go:65] Loading cluster: ha-426061
	I0920 18:26:42.124865  499880 notify.go:220] Checking for updates...
	I0920 18:26:42.126658  499880 config.go:182] Loaded profile config "ha-426061": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:26:42.126702  499880 status.go:174] checking status of ha-426061 ...
	I0920 18:26:42.128234  499880 cli_runner.go:164] Run: docker container inspect ha-426061 --format={{.State.Status}}
	I0920 18:26:42.162626  499880 status.go:364] ha-426061 host status = "Running" (err=<nil>)
	I0920 18:26:42.162686  499880 host.go:66] Checking if "ha-426061" exists ...
	I0920 18:26:42.163073  499880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426061
	I0920 18:26:42.193222  499880 host.go:66] Checking if "ha-426061" exists ...
	I0920 18:26:42.193613  499880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:26:42.193674  499880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426061
	I0920 18:26:42.234143  499880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/ha-426061/id_rsa Username:docker}
	I0920 18:26:42.344852  499880 ssh_runner.go:195] Run: systemctl --version
	I0920 18:26:42.350227  499880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:26:42.364841  499880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:26:42.454620  499880 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 18:26:42.443027873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:26:42.455318  499880 kubeconfig.go:125] found "ha-426061" server: "https://192.168.49.254:8443"
	I0920 18:26:42.455370  499880 api_server.go:166] Checking apiserver status ...
	I0920 18:26:42.455429  499880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:26:42.468578  499880 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	I0920 18:26:42.479849  499880 api_server.go:182] apiserver freezer: "5:freezer:/docker/5d17e25c94a0c3af6e07bcb22506345d6679f2ffba99f4fa467cd1cb4528ddcf/kubepods/burstable/pod23d88a356e59fdf101cdd01362483829/792d87ce3373b7a82b7223eac2916ee9edcf08c5beeab0793342a5d2c372e787"
	I0920 18:26:42.479937  499880 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5d17e25c94a0c3af6e07bcb22506345d6679f2ffba99f4fa467cd1cb4528ddcf/kubepods/burstable/pod23d88a356e59fdf101cdd01362483829/792d87ce3373b7a82b7223eac2916ee9edcf08c5beeab0793342a5d2c372e787/freezer.state
	I0920 18:26:42.490890  499880 api_server.go:204] freezer state: "THAWED"
	I0920 18:26:42.490922  499880 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:26:42.500542  499880 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:26:42.500577  499880 status.go:456] ha-426061 apiserver status = Running (err=<nil>)
	I0920 18:26:42.500589  499880 status.go:176] ha-426061 status: &{Name:ha-426061 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:26:42.500629  499880 status.go:174] checking status of ha-426061-m02 ...
	I0920 18:26:42.500963  499880 cli_runner.go:164] Run: docker container inspect ha-426061-m02 --format={{.State.Status}}
	I0920 18:26:42.520115  499880 status.go:364] ha-426061-m02 host status = "Stopped" (err=<nil>)
	I0920 18:26:42.520192  499880 status.go:377] host is not running, skipping remaining checks
	I0920 18:26:42.520204  499880 status.go:176] ha-426061-m02 status: &{Name:ha-426061-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:26:42.520231  499880 status.go:174] checking status of ha-426061-m03 ...
	I0920 18:26:42.520597  499880 cli_runner.go:164] Run: docker container inspect ha-426061-m03 --format={{.State.Status}}
	I0920 18:26:42.539004  499880 status.go:364] ha-426061-m03 host status = "Running" (err=<nil>)
	I0920 18:26:42.539031  499880 host.go:66] Checking if "ha-426061-m03" exists ...
	I0920 18:26:42.539341  499880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426061-m03
	I0920 18:26:42.556809  499880 host.go:66] Checking if "ha-426061-m03" exists ...
	I0920 18:26:42.557125  499880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:26:42.557172  499880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426061-m03
	I0920 18:26:42.574669  499880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/ha-426061-m03/id_rsa Username:docker}
	I0920 18:26:42.680328  499880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:26:42.692875  499880 kubeconfig.go:125] found "ha-426061" server: "https://192.168.49.254:8443"
	I0920 18:26:42.692951  499880 api_server.go:166] Checking apiserver status ...
	I0920 18:26:42.693010  499880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:26:42.704827  499880 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	I0920 18:26:42.714555  499880 api_server.go:182] apiserver freezer: "5:freezer:/docker/f85f4a505e76fcb2999bcada05b9d47f7ad67cb267b9543bbfa49896e2484cef/kubepods/burstable/podac358d135d6b6cfb94479b677e22c774/1937c09ce3481074223f37199ab639b240861229f87807572c2deab0d69943ed"
	I0920 18:26:42.714630  499880 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f85f4a505e76fcb2999bcada05b9d47f7ad67cb267b9543bbfa49896e2484cef/kubepods/burstable/podac358d135d6b6cfb94479b677e22c774/1937c09ce3481074223f37199ab639b240861229f87807572c2deab0d69943ed/freezer.state
	I0920 18:26:42.723705  499880 api_server.go:204] freezer state: "THAWED"
	I0920 18:26:42.723739  499880 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:26:42.731568  499880 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:26:42.731601  499880 status.go:456] ha-426061-m03 apiserver status = Running (err=<nil>)
	I0920 18:26:42.731611  499880 status.go:176] ha-426061-m03 status: &{Name:ha-426061-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:26:42.731636  499880 status.go:174] checking status of ha-426061-m04 ...
	I0920 18:26:42.731964  499880 cli_runner.go:164] Run: docker container inspect ha-426061-m04 --format={{.State.Status}}
	I0920 18:26:42.749096  499880 status.go:364] ha-426061-m04 host status = "Running" (err=<nil>)
	I0920 18:26:42.749122  499880 host.go:66] Checking if "ha-426061-m04" exists ...
	I0920 18:26:42.749427  499880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426061-m04
	I0920 18:26:42.767645  499880 host.go:66] Checking if "ha-426061-m04" exists ...
	I0920 18:26:42.767950  499880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:26:42.768001  499880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426061-m04
	I0920 18:26:42.797001  499880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/ha-426061-m04/id_rsa Username:docker}
	I0920 18:26:42.895424  499880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:26:42.907675  499880 status.go:176] ha-426061-m04 status: &{Name:ha-426061-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (19.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 node start m02 -v=7 --alsologtostderr: (18.590715451s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr: (1.187230947s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.96s)
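Note: stopping and restarting the m02 control plane is driven by the node subcommand; while m02 is down, status exits non-zero as shown in the StopSecondaryNode output above:
    out/minikube-linux-arm64 -p ha-426061 node stop m02 -v=7 --alsologtostderr
    # exits with status 7 while m02 is stopped
    out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-426061 node start m02 -v=7 --alsologtostderr
    kubectl get nodes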

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.557987669s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.46s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-426061 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-426061 -v=7 --alsologtostderr
E0920 18:27:21.726670  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:21.733080  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:21.744614  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:21.766075  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:21.807622  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:21.889135  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:22.050496  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:22.372193  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:23.014379  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:24.295761  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:26.858459  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:31.981096  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:27:42.222530  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-426061 -v=7 --alsologtostderr: (37.332457558s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426061 --wait=true -v=7 --alsologtostderr
E0920 18:28:02.704440  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:28:10.959140  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:28:38.663853  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:28:43.666580  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-426061 --wait=true -v=7 --alsologtostderr: (1m28.956272923s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-426061
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 node delete m03 -v=7 --alsologtostderr: (9.924321506s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)
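
The go-template at ha_test.go:519 prints each node's Ready condition status, one per line. For reference, a minimal Go sketch (not part of the test suite) that runs the same query and fails unless every node reports True; it assumes kubectl is on PATH and the current kubeconfig context already points at the cluster:

// readycheck.go: run the Ready-condition go-template used above and verify the result.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test, passed directly to kubectl (without the outer quoting).
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			fmt.Fprintln(os.Stderr, "found a node that is not Ready:", line)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}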

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 stop -v=7 --alsologtostderr: (35.965597492s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr: exit status 7 (108.813941ms)

                                                
                                                
-- stdout --
	ha-426061
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426061-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426061-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:29:59.307597  514276 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:29:59.307803  514276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:29:59.307836  514276 out.go:358] Setting ErrFile to fd 2...
	I0920 18:29:59.307856  514276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:29:59.308137  514276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:29:59.308349  514276 out.go:352] Setting JSON to false
	I0920 18:29:59.308407  514276 mustload.go:65] Loading cluster: ha-426061
	I0920 18:29:59.308479  514276 notify.go:220] Checking for updates...
	I0920 18:29:59.308906  514276 config.go:182] Loaded profile config "ha-426061": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:29:59.308944  514276 status.go:174] checking status of ha-426061 ...
	I0920 18:29:59.309520  514276 cli_runner.go:164] Run: docker container inspect ha-426061 --format={{.State.Status}}
	I0920 18:29:59.328582  514276 status.go:364] ha-426061 host status = "Stopped" (err=<nil>)
	I0920 18:29:59.328604  514276 status.go:377] host is not running, skipping remaining checks
	I0920 18:29:59.328611  514276 status.go:176] ha-426061 status: &{Name:ha-426061 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:29:59.328634  514276 status.go:174] checking status of ha-426061-m02 ...
	I0920 18:29:59.328960  514276 cli_runner.go:164] Run: docker container inspect ha-426061-m02 --format={{.State.Status}}
	I0920 18:29:59.347837  514276 status.go:364] ha-426061-m02 host status = "Stopped" (err=<nil>)
	I0920 18:29:59.347860  514276 status.go:377] host is not running, skipping remaining checks
	I0920 18:29:59.347867  514276 status.go:176] ha-426061-m02 status: &{Name:ha-426061-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:29:59.347887  514276 status.go:174] checking status of ha-426061-m04 ...
	I0920 18:29:59.348208  514276 cli_runner.go:164] Run: docker container inspect ha-426061-m04 --format={{.State.Status}}
	I0920 18:29:59.369171  514276 status.go:364] ha-426061-m04 host status = "Stopped" (err=<nil>)
	I0920 18:29:59.369193  514276 status.go:377] host is not running, skipping remaining checks
	I0920 18:29:59.369273  514276 status.go:176] ha-426061-m04 status: &{Name:ha-426061-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
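
As captured above, "minikube status" exits non-zero once the hosts are stopped (exit status 7 in this run) while still printing a per-node block on stdout. A minimal sketch, not from the test suite, of reading both the exit code and the number of stopped hosts; the profile name is taken from this run:

// stoppedcheck.go: inspect the exit code and stdout of "minikube status" after a stop.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns the captured stdout even when the command exits non-zero.
	out, err := exec.Command("minikube", "-p", "ha-426061", "status").Output()
	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode()
	} else if err != nil {
		panic(err) // the command could not be started at all
	}
	stopped := strings.Count(string(out), "host: Stopped")
	fmt.Printf("exit code %d, %d node(s) reporting host: Stopped\n", exitCode, stopped)
}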

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (77.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426061 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 18:30:05.588286  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-426061 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.506005373s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (41.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-426061 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-426061 --control-plane -v=7 --alsologtostderr: (40.77963566s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-426061 status -v=7 --alsologtostderr: (1.054690053s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.600470683s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.60s)

                                                
                                    
x
+
TestJSONOutput/start/Command (85.95s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-500473 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0920 18:32:21.726130  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:32:49.430064  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:33:10.958805  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-500473 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m25.942960103s)
--- PASS: TestJSONOutput/start/Command (85.95s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-500473 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-500473 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-500473 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-500473 --output=json --user=testUser: (5.856475876s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-401711 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-401711 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.714615ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7adfd9e8-f0d4-484e-9a46-756c36744584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-401711] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"48bc0e5e-553a-497b-b8fb-17aa3b57b7ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"65782292-8757-496c-b488-8acf326db363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec6b14b9-ca29-49a8-a89d-e15264c235ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig"}}
	{"specversion":"1.0","id":"3d679d0f-d99f-4f45-bca4-a353094e686f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube"}}
	{"specversion":"1.0","id":"42e81714-95c2-4386-8c50-b42a20b151bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"43ca220a-3f00-472e-88bb-948d9e55b3bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"96068069-4846-4cc9-9cfb-099a19bf0d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-401711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-401711
--- PASS: TestErrorJSONOutput (0.23s)
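
The captured stdout is a stream of one JSON event per line with the fields specversion, id, source, type, datacontenttype and data. A minimal decoding sketch (not from the test suite); the sample line is the error event from this run:

// events.go: decode one of the JSON event lines emitted with --output=json.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"96068069-4846-4cc9-9cfb-099a19bf0d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exitcode %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}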

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (44.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-652777 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-652777 --network=: (42.299037426s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-652777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-652777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-652777: (1.994882639s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.32s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-283543 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-283543 --network=bridge: (32.585256526s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-283543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-283543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-283543: (2.025040971s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.64s)

                                                
                                    
x
+
TestKicExistingNetwork (32.16s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0920 18:35:06.250823  446783 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 18:35:06.266675  446783 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 18:35:06.266763  446783 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 18:35:06.266782  446783 cli_runner.go:164] Run: docker network inspect existing-network
W0920 18:35:06.283265  446783 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 18:35:06.283300  446783 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0920 18:35:06.283317  446783 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0920 18:35:06.283429  446783 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 18:35:06.299273  446783 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc05dbef80c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d9:4b:c6:6a} reservation:<nil>}
I0920 18:35:06.299688  446783 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b65150}
I0920 18:35:06.299727  446783 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 18:35:06.299782  446783 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 18:35:06.372987  446783 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-012366 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-012366 --network=existing-network: (29.998506372s)
helpers_test.go:175: Cleaning up "existing-network-012366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-012366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-012366: (2.006665936s)
I0920 18:35:38.394559  446783 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.16s)
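
The log above shows the setup flow for this test: "docker network inspect existing-network" exits 1 because the network does not exist yet, a free subnet is picked (192.168.49.0/24 is taken, 192.168.58.0/24 is free), and the network is created with minikube's labels. A minimal sketch, not from minikube itself, mirroring that inspect-then-create flow with the values chosen in this run; the -o options are copied verbatim from the logged create command:

// netcreate.go: create the pre-made "existing-network" if it is not already present.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "existing-network"
	// "docker network inspect" exits non-zero when the network is missing.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		fmt.Println(name, "already exists")
		return
	}
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", // subnet picked in this run
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("create failed:", err)
	}
}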

                                                
                                    
x
+
TestKicCustomSubnet (33.46s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-039795 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-039795 --subnet=192.168.60.0/24: (31.299276835s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-039795 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-039795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-039795
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-039795: (2.138184218s)
--- PASS: TestKicCustomSubnet (33.46s)
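
The verification step reads the subnet back with the "{{(index .IPAM.Config 0).Subnet}}" format string. A minimal sketch (not from the test suite) of that check, using the profile name and subnet from this run:

// subnetcheck.go: confirm the created network carries the subnet requested via --subnet.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24"
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-039795",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "inspect failed:", err)
		os.Exit(1)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Fprintf(os.Stderr, "subnet mismatch: got %q, want %q\n", got, want)
		os.Exit(1)
	}
	fmt.Println("subnet matches:", got)
}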

                                                
                                    
x
+
TestKicStaticIP (33.71s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-891961 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-891961 --static-ip=192.168.200.200: (31.303507904s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-891961 ip
helpers_test.go:175: Cleaning up "static-ip-891961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-891961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-891961: (2.257838336s)
--- PASS: TestKicStaticIP (33.71s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (67.85s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-520180 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-520180 --driver=docker  --container-runtime=containerd: (30.526225767s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-522986 --driver=docker  --container-runtime=containerd
E0920 18:37:21.726448  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-522986 --driver=docker  --container-runtime=containerd: (31.702988609s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-520180
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-522986
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-522986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-522986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-522986: (2.061486524s)
helpers_test.go:175: Cleaning up "first-520180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-520180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-520180: (2.201821681s)
--- PASS: TestMinikubeProfile (67.85s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-511438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-511438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.546008547s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.55s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.6s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-511438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.60s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-513374 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-513374 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.262439235s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.26s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-513374 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-511438 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-511438 --alsologtostderr -v=5: (1.644398547s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-513374 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-513374
E0920 18:38:10.958862  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-513374: (1.215896963s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-513374
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-513374: (6.517666994s)
--- PASS: TestMountStart/serial/RestartStopped (7.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-513374 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (68.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.046860153s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.57s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (17.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- rollout status deployment/busybox
E0920 18:39:34.026872  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-436412 -- rollout status deployment/busybox: (15.374075679s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-cq5lb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-v68bz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-cq5lb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-v68bz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-cq5lb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-v68bz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.46s)
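
The deployment check resolves three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from each busybox replica, so DNS is exercised from both nodes. A minimal sketch, not from the test suite, looping over the same pods and names via plain kubectl; it assumes a kubeconfig context named after the profile, whereas the test shells out through the minikube binary, and the pod names are specific to this run:

// dnscheck.go: run the same nslookup probes against each busybox pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-cq5lb", "busybox-7dff88458-v68bz"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-436412",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("--- %s / %s (err=%v)\n%s", pod, name, err, out)
		}
	}
}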

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-cq5lb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-cq5lb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-v68bz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-436412 -- exec busybox-7dff88458-v68bz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
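
The host IP is extracted with "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": line 5 of busybox's nslookup output is the answer record, and its third space-separated field is the address (192.168.67.1 in this run), which is then pinged. A small Go sketch of the same extraction; the sample output below is illustrative only:

// hostip.go: mimic the awk/cut pipeline that pulls the host IP out of nslookup output.
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	// Like cut -d' ', split on single spaces rather than on arbitrary whitespace.
	fields := strings.Split(lines[4], " ") // NR==5 is the fifth line (1-based)
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3 is the third field (1-based)
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.67.1 for the sample above
}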

                                                
                                    
x
+
TestMultiNode/serial/AddNode (17.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-436412 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-436412 -v 3 --alsologtostderr: (16.350485811s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.03s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-436412 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp testdata/cp-test.txt multinode-436412:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2953869982/001/cp-test_multinode-436412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412:/home/docker/cp-test.txt multinode-436412-m02:/home/docker/cp-test_multinode-436412_multinode-436412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test_multinode-436412_multinode-436412-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412:/home/docker/cp-test.txt multinode-436412-m03:/home/docker/cp-test_multinode-436412_multinode-436412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test_multinode-436412_multinode-436412-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp testdata/cp-test.txt multinode-436412-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2953869982/001/cp-test_multinode-436412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m02:/home/docker/cp-test.txt multinode-436412:/home/docker/cp-test_multinode-436412-m02_multinode-436412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test_multinode-436412-m02_multinode-436412.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m02:/home/docker/cp-test.txt multinode-436412-m03:/home/docker/cp-test_multinode-436412-m02_multinode-436412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test_multinode-436412-m02_multinode-436412-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp testdata/cp-test.txt multinode-436412-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2953869982/001/cp-test_multinode-436412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m03:/home/docker/cp-test.txt multinode-436412:/home/docker/cp-test_multinode-436412-m03_multinode-436412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412 "sudo cat /home/docker/cp-test_multinode-436412-m03_multinode-436412.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 cp multinode-436412-m03:/home/docker/cp-test.txt multinode-436412-m02:/home/docker/cp-test_multinode-436412-m03_multinode-436412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 ssh -n multinode-436412-m02 "sudo cat /home/docker/cp-test_multinode-436412-m03_multinode-436412-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.39s)
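
Each step above pairs a "cp" with an "ssh -n <node> \"sudo cat ...\"" read-back to confirm the file arrived intact. A minimal sketch, not from the test suite, of one such round-trip using the profile and node names from this run:

// cpverify.go: copy a file onto a node with "minikube cp" and read it back over ssh.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-436412", "multinode-436412-m02"
	local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "cp failed: %v\n%s", err, out)
		os.Exit(1)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh failed:", err)
		os.Exit(1)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Fprintln(os.Stderr, "contents differ after copy")
		os.Exit(1)
	}
	fmt.Println("copy verified on", node)
}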

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-436412 node stop m03: (1.22505788s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436412 status: exit status 7 (517.692823ms)

                                                
                                                
-- stdout --
	multinode-436412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-436412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-436412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr: exit status 7 (582.798676ms)

                                                
                                                
-- stdout --
	multinode-436412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-436412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-436412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:40:17.879978  567860 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:40:17.880141  567860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:40:17.880150  567860 out.go:358] Setting ErrFile to fd 2...
	I0920 18:40:17.880156  567860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:40:17.880394  567860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:40:17.880588  567860 out.go:352] Setting JSON to false
	I0920 18:40:17.880627  567860 mustload.go:65] Loading cluster: multinode-436412
	I0920 18:40:17.880732  567860 notify.go:220] Checking for updates...
	I0920 18:40:17.881110  567860 config.go:182] Loaded profile config "multinode-436412": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:40:17.881127  567860 status.go:174] checking status of multinode-436412 ...
	I0920 18:40:17.881717  567860 cli_runner.go:164] Run: docker container inspect multinode-436412 --format={{.State.Status}}
	I0920 18:40:17.901867  567860 status.go:364] multinode-436412 host status = "Running" (err=<nil>)
	I0920 18:40:17.901894  567860 host.go:66] Checking if "multinode-436412" exists ...
	I0920 18:40:17.902253  567860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-436412
	I0920 18:40:17.943357  567860 host.go:66] Checking if "multinode-436412" exists ...
	I0920 18:40:17.943772  567860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:40:17.943827  567860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-436412
	I0920 18:40:17.964460  567860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/multinode-436412/id_rsa Username:docker}
	I0920 18:40:18.068295  567860 ssh_runner.go:195] Run: systemctl --version
	I0920 18:40:18.073346  567860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:40:18.086919  567860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:40:18.159587  567860 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 18:40:18.144989663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:40:18.160207  567860 kubeconfig.go:125] found "multinode-436412" server: "https://192.168.67.2:8443"
	I0920 18:40:18.160247  567860 api_server.go:166] Checking apiserver status ...
	I0920 18:40:18.160297  567860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:40:18.172978  567860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	I0920 18:40:18.183667  567860 api_server.go:182] apiserver freezer: "5:freezer:/docker/50f762f3c080445a3f37ea29738265f86aaa0e32a2ab22acc7645a566ddf0120/kubepods/burstable/pod85bb175147ad94db6f0dcd1bd6df52ca/4c5948080b5048ef1fb9d319f50452cab3a8d50703349a3c74de1b0977477cee"
	I0920 18:40:18.183753  567860 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/50f762f3c080445a3f37ea29738265f86aaa0e32a2ab22acc7645a566ddf0120/kubepods/burstable/pod85bb175147ad94db6f0dcd1bd6df52ca/4c5948080b5048ef1fb9d319f50452cab3a8d50703349a3c74de1b0977477cee/freezer.state
	I0920 18:40:18.193050  567860 api_server.go:204] freezer state: "THAWED"
	I0920 18:40:18.193084  567860 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 18:40:18.201037  567860 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 18:40:18.201069  567860 status.go:456] multinode-436412 apiserver status = Running (err=<nil>)
	I0920 18:40:18.201081  567860 status.go:176] multinode-436412 status: &{Name:multinode-436412 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:40:18.201099  567860 status.go:174] checking status of multinode-436412-m02 ...
	I0920 18:40:18.201434  567860 cli_runner.go:164] Run: docker container inspect multinode-436412-m02 --format={{.State.Status}}
	I0920 18:40:18.218856  567860 status.go:364] multinode-436412-m02 host status = "Running" (err=<nil>)
	I0920 18:40:18.218884  567860 host.go:66] Checking if "multinode-436412-m02" exists ...
	I0920 18:40:18.219191  567860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-436412-m02
	I0920 18:40:18.245160  567860 host.go:66] Checking if "multinode-436412-m02" exists ...
	I0920 18:40:18.245551  567860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:40:18.245621  567860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-436412-m02
	I0920 18:40:18.264160  567860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19679-440039/.minikube/machines/multinode-436412-m02/id_rsa Username:docker}
	I0920 18:40:18.363937  567860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:40:18.377463  567860 status.go:176] multinode-436412-m02 status: &{Name:multinode-436412-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:40:18.377501  567860 status.go:174] checking status of multinode-436412-m03 ...
	I0920 18:40:18.377849  567860 cli_runner.go:164] Run: docker container inspect multinode-436412-m03 --format={{.State.Status}}
	I0920 18:40:18.394499  567860 status.go:364] multinode-436412-m03 host status = "Stopped" (err=<nil>)
	I0920 18:40:18.394525  567860 status.go:377] host is not running, skipping remaining checks
	I0920 18:40:18.394534  567860 status.go:176] multinode-436412-m03 status: &{Name:multinode-436412-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
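The stderr trace above shows how minikube status verifies the control plane: it pgreps the kube-apiserver process inside the node, reads the process's freezer cgroup state, and then probes the apiserver's /healthz endpoint. A rough manual equivalent, a sketch only, assuming the multinode-436412 profile from this run, that curl is available inside the node image, and with <pid> as a hypothetical placeholder for the pid printed by the first command:

    # locate the kube-apiserver process inside the node
    out/minikube-linux-arm64 ssh -p multinode-436412 "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # check which freezer cgroup it belongs to (replace <pid> with the pid from above)
    out/minikube-linux-arm64 ssh -p multinode-436412 "sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup"
    # probe the apiserver health endpoint that the kubeconfig points at
    out/minikube-linux-arm64 ssh -p multinode-436412 "curl -ks https://192.168.67.2:8443/healthz"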

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-436412 node start m03 -v=7 --alsologtostderr: (8.882775052s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.66s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (105.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436412
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-436412
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-436412: (24.976239589s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436412 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436412 --wait=true -v=8 --alsologtostderr: (1m20.587096904s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436412
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.69s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-436412 node delete m03: (4.889004052s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.59s)
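Condensed, the node-removal step above is: drop the m03 worker from the profile, then confirm the remaining nodes are still Ready. A minimal sketch, assuming the multinode-436412 profile from this run:

    # remove the worker and re-check cluster status
    out/minikube-linux-arm64 -p multinode-436412 node delete m03
    out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
    # the remaining nodes should report Ready
    kubectl get nodes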

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 stop
E0920 18:42:21.726457  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-436412 stop: (23.917649214s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436412 status: exit status 7 (99.258139ms)

                                                
                                                
-- stdout --
	multinode-436412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-436412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr: exit status 7 (88.48354ms)

                                                
                                                
-- stdout --
	multinode-436412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-436412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:42:43.410252  576310 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:42:43.410467  576310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:42:43.410482  576310 out.go:358] Setting ErrFile to fd 2...
	I0920 18:42:43.410488  576310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:42:43.410765  576310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:42:43.410980  576310 out.go:352] Setting JSON to false
	I0920 18:42:43.411061  576310 mustload.go:65] Loading cluster: multinode-436412
	I0920 18:42:43.411126  576310 notify.go:220] Checking for updates...
	I0920 18:42:43.412373  576310 config.go:182] Loaded profile config "multinode-436412": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:42:43.412407  576310 status.go:174] checking status of multinode-436412 ...
	I0920 18:42:43.413218  576310 cli_runner.go:164] Run: docker container inspect multinode-436412 --format={{.State.Status}}
	I0920 18:42:43.431652  576310 status.go:364] multinode-436412 host status = "Stopped" (err=<nil>)
	I0920 18:42:43.431678  576310 status.go:377] host is not running, skipping remaining checks
	I0920 18:42:43.431687  576310 status.go:176] multinode-436412 status: &{Name:multinode-436412 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:42:43.431717  576310 status.go:174] checking status of multinode-436412-m02 ...
	I0920 18:42:43.432036  576310 cli_runner.go:164] Run: docker container inspect multinode-436412-m02 --format={{.State.Status}}
	I0920 18:42:43.449481  576310 status.go:364] multinode-436412-m02 host status = "Stopped" (err=<nil>)
	I0920 18:42:43.449502  576310 status.go:377] host is not running, skipping remaining checks
	I0920 18:42:43.449509  576310 status.go:176] multinode-436412-m02 status: &{Name:multinode-436412-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436412 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 18:43:10.958932  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436412 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.144878254s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-436412 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.95s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-436412
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436412-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-436412-m02 --driver=docker  --container-runtime=containerd: exit status 14 (83.056801ms)

                                                
                                                
-- stdout --
	* [multinode-436412-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-436412-m02' is duplicated with machine name 'multinode-436412-m02' in profile 'multinode-436412'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-436412-m03 --driver=docker  --container-runtime=containerd
E0920 18:43:44.793108  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-436412-m03 --driver=docker  --container-runtime=containerd: (31.856977411s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-436412
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-436412: exit status 80 (341.335271ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-436412 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-436412-m03 already exists in multinode-436412-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-436412-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-436412-m03: (1.969223278s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.32s)

                                                
                                    
TestPreload (127.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.241894869s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-521391 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-521391 image pull gcr.io/k8s-minikube/busybox: (1.923319316s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-521391
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-521391: (1.233666397s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (35.394066881s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-521391 image list
helpers_test.go:175: Cleaning up "test-preload-521391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-521391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-521391: (2.375165015s)
--- PASS: TestPreload (127.41s)
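The preload check above boils down to: create a cluster without the preloaded image tarball, side-load an image, restart, and verify the image survives. A minimal sketch of that flow, assuming the same profile name and versions as the run above:

    out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    out/minikube-linux-arm64 -p test-preload-521391 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-521391
    out/minikube-linux-arm64 start -p test-preload-521391 --memory=2200 --driver=docker --container-runtime=containerd
    # the pulled busybox image should still appear after the restart
    out/minikube-linux-arm64 -p test-preload-521391 image list
    out/minikube-linux-arm64 delete -p test-preload-521391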

                                                
                                    
TestScheduledStopUnix (107.26s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-616710 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-616710 --memory=2048 --driver=docker  --container-runtime=containerd: (31.12005124s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-616710 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-616710 -n scheduled-stop-616710
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-616710 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:46:47.712077  446783 retry.go:31] will retry after 123.769µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.713305  446783 retry.go:31] will retry after 170.221µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.714472  446783 retry.go:31] will retry after 199.571µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.714928  446783 retry.go:31] will retry after 449.657µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.715575  446783 retry.go:31] will retry after 450.502µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.716705  446783 retry.go:31] will retry after 868.968µs: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.717833  446783 retry.go:31] will retry after 1.515637ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.720031  446783 retry.go:31] will retry after 2.154101ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.723198  446783 retry.go:31] will retry after 2.665302ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.726380  446783 retry.go:31] will retry after 1.949943ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.728592  446783 retry.go:31] will retry after 5.233222ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.734836  446783 retry.go:31] will retry after 6.810553ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.742060  446783 retry.go:31] will retry after 9.664806ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.752286  446783 retry.go:31] will retry after 12.405501ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.765505  446783 retry.go:31] will retry after 32.51038ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
I0920 18:46:47.798828  446783 retry.go:31] will retry after 42.231591ms: open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/scheduled-stop-616710/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-616710 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-616710 -n scheduled-stop-616710
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-616710
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-616710 --schedule 15s
E0920 18:47:21.726442  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-616710
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-616710: exit status 7 (69.526134ms)

                                                
                                                
-- stdout --
	scheduled-stop-616710
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-616710 -n scheduled-stop-616710
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-616710 -n scheduled-stop-616710: exit status 7 (66.3364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-616710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-616710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-616710: (4.573317576s)
--- PASS: TestScheduledStopUnix (107.26s)
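The scheduled-stop sequence above can be replayed by hand; a minimal sketch using the same profile name, where the first schedule is cancelled and the second is allowed to fire:

    out/minikube-linux-arm64 start -p scheduled-stop-616710 --memory=2048 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p scheduled-stop-616710 --schedule 5m        # arm a stop five minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-616710 --cancel-scheduled   # disarm it again
    out/minikube-linux-arm64 stop -p scheduled-stop-616710 --schedule 15s       # arm a short one and let it fire
    sleep 20
    # exit status 7 with host "Stopped" is expected once the scheduled stop has run
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-616710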

                                                
                                    
TestInsufficientStorage (12.95s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-197305 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0920 18:48:10.958465  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-197305 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.454529886s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4b63e9b7-b7c5-4b52-9b16-04c5fbb702b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-197305] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f547a7c-f31c-4a23-88b6-028c004b1791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"193a42fb-bbd6-4056-a4ae-fb90d2c2ae6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a1ad1a3f-21c0-4e25-bd30-2a590af79e06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig"}}
	{"specversion":"1.0","id":"ccd7243c-6d60-4ff8-b545-a82d1ba3063c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube"}}
	{"specversion":"1.0","id":"df0fd601-0128-409f-bc33-13a21b9ba752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f22c4fec-669f-4fae-8e4e-6f117b1a8d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d00ed841-0e54-4d2a-9a01-3846bd5c4985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a286670e-56b3-4ca9-b855-a263757087a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4dfed0fc-91e5-4ebf-b843-3a64dec83b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e83ed05-3fe0-4d69-9bd4-c425166d805b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e5d236a3-d127-4c4a-be0f-86d044f0f712","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-197305\" primary control-plane node in \"insufficient-storage-197305\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f70c9384-602f-4111-a8a4-969c028bd079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c2e1f5-b0a2-43bc-bc82-4bd53214888d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"23f13be8-6ec4-483a-a961-79594dae6411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-197305 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-197305 --output=json --layout=cluster: exit status 7 (304.477361ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-197305","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-197305","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:48:14.085061  594855 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-197305" does not appear in /home/jenkins/minikube-integration/19679-440039/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-197305 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-197305 --output=json --layout=cluster: exit status 7 (295.969876ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-197305","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-197305","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:48:14.381474  594915 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-197305" does not appear in /home/jenkins/minikube-integration/19679-440039/kubeconfig
	E0920 18:48:14.392110  594915 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/insufficient-storage-197305/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-197305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-197305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-197305: (1.894945845s)
--- PASS: TestInsufficientStorage (12.95s)
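The start output above is a stream of JSON cloud events, and the storage limits appear to come from test-only knobs (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE, echoed in the setup messages). A sketch of the same check, on the assumption that those are environment variables behaving as the emitted settings suggest:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-197305 --memory=2048 \
      --output=json --wait=true --driver=docker --container-runtime=containerd
    echo $?   # 26 (RSRC_DOCKER_STORAGE) when /var is treated as full
    # cluster-level status then reports StatusCode 507 / InsufficientStorage
    out/minikube-linux-arm64 status -p insufficient-storage-197305 --output=json --layout=cluster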

                                                
                                    
TestRunningBinaryUpgrade (86.25s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2055986403 start -p running-upgrade-332817 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2055986403 start -p running-upgrade-332817 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.879655863s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-332817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-332817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.001404753s)
helpers_test.go:175: Cleaning up "running-upgrade-332817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-332817
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-332817: (2.671387107s)
--- PASS: TestRunningBinaryUpgrade (86.25s)

                                                
                                    
TestKubernetesUpgrade (347.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.787663159s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-443835
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-443835: (3.024930731s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-443835 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-443835 status --format={{.Host}}: exit status 7 (104.890289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.598919055s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-443835 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (94.05415ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-443835] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-443835
	    minikube start -p kubernetes-upgrade-443835 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4438352 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-443835 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.943819334s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-443835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-443835
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-443835: (2.625101495s)
--- PASS: TestKubernetesUpgrade (347.32s)
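Condensed, the test above drives one cluster from an old Kubernetes release to a new one and then confirms that a downgrade of the same profile is refused. A minimal sketch with the same versions and profile name:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-443835
    # upgrade the stopped cluster in place to the newer release
    out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.31.1 \
      --driver=docker --container-runtime=containerd
    kubectl --context kubernetes-upgrade-443835 version --output=json
    # downgrading the same profile is expected to fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
    out/minikube-linux-arm64 start -p kubernetes-upgrade-443835 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=docker --container-runtime=containerd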

                                                
                                    
TestMissingContainerUpgrade (183.58s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3081853379 start -p missing-upgrade-050625 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3081853379 start -p missing-upgrade-050625 --memory=2200 --driver=docker  --container-runtime=containerd: (1m39.560658734s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-050625
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-050625: (10.29850386s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-050625
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-050625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-050625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.403729006s)
helpers_test.go:175: Cleaning up "missing-upgrade-050625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-050625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-050625: (2.63775781s)
--- PASS: TestMissingContainerUpgrade (183.58s)
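What this test simulates is a cluster whose node container was removed behind an older minikube's back and then recovered by the current binary. A minimal sketch, with /path/to/minikube-v1.26.0 standing in for the older release binary that the harness stages in a temp file:

    /path/to/minikube-v1.26.0 start -p missing-upgrade-050625 --memory=2200 --driver=docker --container-runtime=containerd
    # delete the node container out from under minikube
    docker stop missing-upgrade-050625
    docker rm missing-upgrade-050625
    # the current binary is expected to recreate the missing container on start
    out/minikube-linux-arm64 start -p missing-upgrade-050625 --memory=2200 --driver=docker --container-runtime=containerd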

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (84.332641ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-112786] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
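The failure above is the expected flag conflict: --no-kubernetes cannot be combined with --kubernetes-version. A minimal sketch of the conflict and of the remedy the error message suggests:

    # exits with status 14 (MK_USAGE): the two flags are mutually exclusive
    out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --kubernetes-version=1.20 \
      --driver=docker --container-runtime=containerd
    # if a kubernetes-version is pinned in the global config, unset it first
    out/minikube-linux-arm64 config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --driver=docker --container-runtime=containerd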

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-112786 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-112786 --driver=docker  --container-runtime=containerd: (38.222645433s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-112786 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.59s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.60634745s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-112786 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-112786 status -o json: exit status 2 (394.926267ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-112786","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-112786
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-112786: (3.166665047s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.17s)

                                                
                                    
TestNoKubernetes/serial/Start (8.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-112786 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.905909236s)
--- PASS: TestNoKubernetes/serial/Start (8.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-112786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-112786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.423373ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-112786
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-112786: (1.204715622s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-112786 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-112786 --driver=docker  --container-runtime=containerd: (7.132906064s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-112786 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-112786 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.988587ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3918382848 start -p stopped-upgrade-964165 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3918382848 start -p stopped-upgrade-964165 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.597945901s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3918382848 -p stopped-upgrade-964165 stop
E0920 18:52:21.726234  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3918382848 -p stopped-upgrade-964165 stop: (19.989908126s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-964165 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0920 18:53:10.958432  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-964165 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.063693273s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.65s)
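The upgrade path exercised here is: provision with an older released binary, stop the cluster, then start it again with the binary under test. A minimal sketch, with /path/to/minikube-v1.26.0 as a stand-in for the temp-staged old binary:

    /path/to/minikube-v1.26.0 start -p stopped-upgrade-964165 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /path/to/minikube-v1.26.0 -p stopped-upgrade-964165 stop
    # the new binary should adopt and upgrade the stopped cluster
    out/minikube-linux-arm64 start -p stopped-upgrade-964165 --memory=2200 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 logs -p stopped-upgrade-964165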

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-964165
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-964165: (1.285155379s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
TestPause/serial/Start (56.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342625 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-342625 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.883203824s)
--- PASS: TestPause/serial/Start (56.88s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342625 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-342625 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.324241734s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.36s)

                                                
                                    
TestPause/serial/Pause (1.07s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-342625 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-342625 --alsologtostderr -v=5: (1.06816717s)
--- PASS: TestPause/serial/Pause (1.07s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-342625 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-342625 --output=json --layout=cluster: exit status 2 (440.019672ms)

                                                
                                                
-- stdout --
	{"Name":"pause-342625","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-342625","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
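Note: the cluster-layout status above is machine-readable; StatusCode 418 corresponds to "Paused" and 405 to "Stopped", and the command exits non-zero while the cluster is paused. A minimal sketch for pulling the per-component state out of the same output (assumes jq is available; the profile name is the one used in this run):

    out/minikube-linux-arm64 status -p pause-342625 --output=json --layout=cluster \
      | jq '{cluster: .StatusName, components: .Nodes[0].Components}'   # exits 2 while paused, so ignore the exit code when scripting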

TestPause/serial/Unpause (0.92s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-342625 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

TestPause/serial/PauseAgain (0.92s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-342625 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

TestPause/serial/DeletePaused (2.91s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-342625 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-342625 --alsologtostderr -v=5: (2.906166773s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

TestPause/serial/VerifyDeletedResources (0.77s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-342625
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-342625: exit status 1 (35.919871ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-342625: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.77s)
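Note: the verification above treats a failing `docker volume inspect` as proof that the profile's resources are gone. A rough equivalent of the same check, run by hand against a freshly deleted profile:

    out/minikube-linux-arm64 delete -p pause-342625 --alsologtostderr -v=5
    docker volume inspect pause-342625 || echo "volume gone, as expected"
    docker network ls | grep pause-342625 || echo "network gone, as expected"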

TestNetworkPlugins/group/false (5.35s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-428619 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-428619 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (278.245897ms)

-- stdout --
	* [false-428619] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 18:55:58.583074  636789 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:55:58.586630  636789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:58.586680  636789 out.go:358] Setting ErrFile to fd 2...
	I0920 18:55:58.586702  636789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:55:58.586998  636789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-440039/.minikube/bin
	I0920 18:55:58.587526  636789 out.go:352] Setting JSON to false
	I0920 18:55:58.588537  636789 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9510,"bootTime":1726849049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 18:55:58.588638  636789 start.go:139] virtualization:  
	I0920 18:55:58.592657  636789 out.go:177] * [false-428619] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:55:58.594584  636789 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:55:58.594778  636789 notify.go:220] Checking for updates...
	I0920 18:55:58.602490  636789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:55:58.604693  636789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-440039/kubeconfig
	I0920 18:55:58.606982  636789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-440039/.minikube
	I0920 18:55:58.609012  636789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:55:58.611269  636789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:55:58.614014  636789 config.go:182] Loaded profile config "force-systemd-flag-226210": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:55:58.614134  636789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:55:58.660040  636789 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:55:58.660170  636789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:55:58.759995  636789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-20 18:55:58.748904591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:55:58.760102  636789 docker.go:318] overlay module found
	I0920 18:55:58.763570  636789 out.go:177] * Using the docker driver based on user configuration
	I0920 18:55:58.765477  636789 start.go:297] selected driver: docker
	I0920 18:55:58.765505  636789 start.go:901] validating driver "docker" against <nil>
	I0920 18:55:58.765521  636789 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:55:58.768097  636789 out.go:201] 
	W0920 18:55:58.769915  636789 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0920 18:55:58.771744  636789 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-428619 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-428619

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-428619

>>> host: /etc/nsswitch.conf:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/hosts:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/resolv.conf:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-428619

>>> host: crictl pods:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: crictl containers:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> k8s: describe netcat deployment:
error: context "false-428619" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-428619" does not exist

>>> k8s: netcat logs:
error: context "false-428619" does not exist

>>> k8s: describe coredns deployment:
error: context "false-428619" does not exist

>>> k8s: describe coredns pods:
error: context "false-428619" does not exist

>>> k8s: coredns logs:
error: context "false-428619" does not exist

>>> k8s: describe api server pod(s):
error: context "false-428619" does not exist

>>> k8s: api server logs:
error: context "false-428619" does not exist

>>> host: /etc/cni:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: ip a s:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: ip r s:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: iptables-save:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: iptables table nat:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> k8s: describe kube-proxy daemon set:
error: context "false-428619" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-428619" does not exist

>>> k8s: kube-proxy logs:
error: context "false-428619" does not exist

>>> host: kubelet daemon status:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: kubelet daemon config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> k8s: kubelet logs:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-428619

>>> host: docker daemon status:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: docker daemon config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/docker/daemon.json:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: docker system info:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: cri-docker daemon status:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: cri-docker daemon config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: cri-dockerd version:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: containerd daemon status:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: containerd daemon config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/containerd/config.toml:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: containerd config dump:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: crio daemon status:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: crio daemon config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: /etc/crio:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

>>> host: crio config:
* Profile "false-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428619"

----------------------- debugLogs end: false-428619 [took: 4.85371519s] --------------------------------
helpers_test.go:175: Cleaning up "false-428619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-428619
--- PASS: TestNetworkPlugins/group/false (5.35s)
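Note: exit status 14 (MK_USAGE) is the expected result here; the test only checks that `--cni=false` is rejected when the containerd runtime is selected, then cleans up the profile. A hedged way to reproduce the same validation failure locally (the profile name is arbitrary):

    out/minikube-linux-arm64 start -p cni-false-check --memory=2048 --cni=false \
      --driver=docker --container-runtime=containerd
    echo "exit code: $?"   # expected 14: the "containerd" container runtime requires CNI
    out/minikube-linux-arm64 delete -p cni-false-check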

TestStartStop/group/old-k8s-version/serial/FirstStart (161.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-809747 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0920 18:57:21.726406  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:58:10.958724  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-809747 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m41.18429717s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.18s)

TestStartStop/group/no-preload/serial/FirstStart (78.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-851913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-851913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m18.517900667s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.52s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-809747 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [317d74ba-cb45-46bc-b810-bed1a748012e] Pending
helpers_test.go:344: "busybox" [317d74ba-cb45-46bc-b810-bed1a748012e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [317d74ba-cb45-46bc-b810-bed1a748012e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005226371s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-809747 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.29s)
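Note: DeployApp applies testdata/busybox.yaml (contents not reproduced here), waits for the pod labelled integration-test=busybox to become Ready, then reads the file-descriptor limit inside it. A roughly equivalent manual check against the same context, using kubectl wait in place of the test helper's polling:

    kubectl --context old-k8s-version-809747 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-809747 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-809747 exec busybox -- /bin/sh -c "ulimit -n"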

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-809747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-809747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139078131s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-809747 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-809747 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-809747 --alsologtostderr -v=3: (12.414269902s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809747 -n old-k8s-version-809747
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809747 -n old-k8s-version-809747: exit status 7 (101.13431ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-809747 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 19:00:24.794384  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-851913 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b858ce7-60ad-4e07-8072-512295f9a399] Pending
helpers_test.go:344: "busybox" [4b858ce7-60ad-4e07-8072-512295f9a399] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b858ce7-60ad-4e07-8072-512295f9a399] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00513277s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-851913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-851913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-851913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.185750225s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-851913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-851913 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-851913 --alsologtostderr -v=3: (12.125806001s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-851913 -n no-preload-851913
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-851913 -n no-preload-851913: exit status 7 (71.281019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-851913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
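Note: exit status 7 from `minikube status` is the expected code for a stopped profile (stdout reads "Stopped"), and the test then confirms that addons can still be enabled while the cluster is down. A small sketch of the same sequence, using the profile from this run:

    out/minikube-linux-arm64 stop -p no-preload-851913 --alsologtostderr -v=3
    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-851913 -n no-preload-851913
    echo "status exit code: $?"   # 7 == stopped; the test treats this as acceptable
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-851913 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4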

TestStartStop/group/no-preload/serial/SecondStart (267.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-851913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 19:02:21.726738  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:10.959162  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-851913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.293760523s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-851913 -n no-preload-851913
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.68s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8v2qt" [ae259fbe-0419-445c-bd85-2fc2e46de9bd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005100535s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8v2qt" [ae259fbe-0419-445c-bd85-2fc2e46de9bd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005046241s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-851913 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-851913 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
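Note: the image check lists everything loaded in the node and flags images outside the expected minikube set (here kindnetd and the busybox test image). A sketch of the same listing; the repoTags field name is assumed from the JSON output, and jq is assumed to be installed:

    out/minikube-linux-arm64 -p no-preload-851913 image list --format=json \
      | jq -r '.[].repoTags[]'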

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-851913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-851913 -n no-preload-851913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-851913 -n no-preload-851913: exit status 2 (323.682158ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-851913 -n no-preload-851913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-851913 -n no-preload-851913: exit status 2 (332.423182ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-851913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-851913 -n no-preload-851913
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-851913 -n no-preload-851913
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

TestStartStop/group/embed-certs/serial/FirstStart (82.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-208780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-208780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m22.337802841s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.34s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xd568" [6bbbb028-72b2-4734-a0d8-eb43a1e62800] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003294102s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xd568" [6bbbb028-72b2-4734-a0d8-eb43a1e62800] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004723983s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-809747 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.22s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-809747 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/old-k8s-version/serial/Pause (3.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-809747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-809747 --alsologtostderr -v=1: (1.110936571s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809747 -n old-k8s-version-809747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809747 -n old-k8s-version-809747: exit status 2 (369.565056ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809747 -n old-k8s-version-809747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809747 -n old-k8s-version-809747: exit status 2 (351.786944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-809747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809747 -n old-k8s-version-809747
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809747 -n old-k8s-version-809747
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.51s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-178984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 19:07:21.726008  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-178984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m22.464227867s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.46s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208780 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5931194e-f014-4c0f-ac79-fd4e45d15634] Pending
helpers_test.go:344: "busybox" [5931194e-f014-4c0f-ac79-fd4e45d15634] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5931194e-f014-4c0f-ac79-fd4e45d15634] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003884794s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-208780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-208780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043182339s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-208780 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-208780 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-208780 --alsologtostderr -v=3: (12.113100725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-208780 -n embed-certs-208780
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-208780 -n embed-certs-208780: exit status 7 (85.590737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-208780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
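
The non-zero exit above is expected: with a template format, status prints the host state and signals a stopped profile through its exit code (7 here), which the test treats as "may be ok". The same sequence by hand, using the commands from the log:

out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-208780 -n embed-certs-208780
echo "exit code: $?"   # 7 while the profile is stopped, per the output above
# addons can still be toggled while the profile is stopped
out/minikube-linux-arm64 addons enable dashboard -p embed-certs-208780 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4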

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-208780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 19:08:10.958715  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-208780 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.247231397s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-208780 -n embed-certs-208780
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-178984 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f68f5037-9cdf-4f66-a7ae-addb01b539d1] Pending
helpers_test.go:344: "busybox" [f68f5037-9cdf-4f66-a7ae-addb01b539d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f68f5037-9cdf-4f66-a7ae-addb01b539d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.014275716s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-178984 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-178984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-178984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097489889s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-178984 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-178984 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-178984 --alsologtostderr -v=3: (12.144186511s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984: exit status 7 (68.963482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-178984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-178984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 19:10:01.512352  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.518878  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.530267  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.551744  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.593273  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.674715  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:01.836498  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:02.158140  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:02.799425  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:04.080894  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:06.642594  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:11.764802  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:22.007036  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:10:42.488691  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:14.924274  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:14.930668  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:14.942175  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:14.963667  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:15.006649  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:15.088185  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:15.250431  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:15.572296  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:16.214045  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:17.495406  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:20.057114  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:23.450576  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:25.178550  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:35.420323  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:11:55.901810  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:12:21.726692  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-178984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m28.01396963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.54s)
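
The second start reuses the same flags as the first run, including the non-default API server port. The start command is verbatim from the log; the cluster-info call is an added sanity check, assuming the kubeconfig context was written by minikube:

out/minikube-linux-arm64 start -p default-k8s-diff-port-178984 --memory=2200 --alsologtostderr \
  --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.31.1
# added check: the control plane should now answer on port 8444
kubectl --context default-k8s-diff-port-178984 cluster-info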

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hpk7l" [8864ac21-d2be-43e0-ba3c-f51f07707647] Running
E0920 19:12:36.863507  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004082723s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hpk7l" [8864ac21-d2be-43e0-ba3c-f51f07707647] Running
E0920 19:12:45.372664  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003918156s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-208780 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-208780 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
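
The image check lists everything loaded on the node and flags images that are not part of a stock minikube cluster. A rough way to eyeball the same thing; the grep filter is an addition (the exact JSON field layout of image list --format=json is not shown in this log):

out/minikube-linux-arm64 -p embed-certs-208780 image list --format=json \
  | grep -oE '"[^"]*(kindnetd|busybox)[^"]*"'   # crude filter for the two non-minikube images reported above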

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-208780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-208780 -n embed-certs-208780
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-208780 -n embed-certs-208780: exit status 2 (333.309378ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-208780 -n embed-certs-208780
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-208780 -n embed-certs-208780: exit status 2 (339.551992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-208780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-208780 -n embed-certs-208780
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-208780 -n embed-certs-208780
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)
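
Pause deliberately leaves the profile in a state where status exits non-zero: the API server reports Paused and the kubelet reports Stopped, both with exit status 2, which the test tolerates. The full cycle, using the commands from the log:

out/minikube-linux-arm64 pause -p embed-certs-208780 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-208780 -n embed-certs-208780   # prints Paused, exits 2
out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-208780 -n embed-certs-208780     # prints Stopped, exits 2
out/minikube-linux-arm64 unpause -p embed-certs-208780 --alsologtostderr -v=1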

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-716002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 19:12:54.031371  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:13:10.958684  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-716002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (35.788390854s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nkq8q" [72c77b42-1a51-442b-8fc3-376193c1827d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003560127s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nkq8q" [72c77b42-1a51-442b-8fc3-376193c1827d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005083824s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-178984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-716002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-716002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.016460081s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.02s)
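
The warning is expected: the newest-cni profile starts with --network-plugin=cni but no CNI installed, so workload pods cannot schedule until one is applied. One possible follow-up, as a sketch only (this test does not do it; the TestNetworkPlugins profiles below get a CNI via --cni=kindnet, --cni=calico, or --cni=testdata/kube-flannel.yaml at start time instead), assuming it is run from the integration test directory where that manifest lives:

# assumption: install a CNI by hand so pods can schedule in cni mode
kubectl --context newest-cni-716002 apply -f testdata/kube-flannel.yaml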

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-178984 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-178984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-178984 --alsologtostderr -v=1: (1.191433979s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984: exit status 2 (443.429456ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984: exit status 2 (459.837238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-178984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178984 -n default-k8s-diff-port-178984
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-716002 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-716002 --alsologtostderr -v=3: (1.351394577s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716002 -n newest-cni-716002
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716002 -n newest-cni-716002: exit status 7 (105.049683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-716002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (22.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-716002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-716002 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (22.064644526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-716002 -n newest-cni-716002
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.52s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.53870997s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.54s)
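
The remaining TestNetworkPlugins profiles differ from this one only in the --cni flag and the profile name. A compact sketch of that matrix, with flags copied from the start commands recorded below (the loop itself is purely illustrative):

# kindnet and calico use the same flags with a different --cni value
for cni in kindnet calico; do
  out/minikube-linux-arm64 start -p "${cni}-428619" --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m --cni="${cni}" --driver=docker --container-runtime=containerd
done
# custom-flannel passes a manifest path instead of a built-in plugin name
out/minikube-linux-arm64 start -p custom-flannel-428619 --memory=3072 --alsologtostderr \
  --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd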

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-716002 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-716002 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716002 -n newest-cni-716002
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716002 -n newest-cni-716002: exit status 2 (356.101608ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716002 -n newest-cni-716002
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716002 -n newest-cni-716002: exit status 2 (434.996995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-716002 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-716002 -n newest-cni-716002
E0920 19:13:58.785533  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-716002 -n newest-cni-716002
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.59s)
E0920 19:19:47.937478  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:01.511545  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.696818  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.703324  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.714744  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.736215  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.777711  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:04.859792  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:05.021351  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:05.343455  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:05.985304  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:07.267565  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:09.829214  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:14.951367  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:25.193589  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.527750  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.534260  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.545737  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.567164  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.608721  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.691141  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:28.852705  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:29.174386  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:29.816084  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:31.097827  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:33.659243  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:38.781408  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0920 19:15:01.512395  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m26.381385809s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.38s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-428619 "pgrep -a kubelet"
I0920 19:15:04.421804  446783 config.go:182] Loaded profile config "auto-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
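
The flags check simply reads the running kubelet's command line over SSH. The first command is copied from the log; the second line is an added convenience that splits the command line into one flag per line (no particular flag names are assumed):

out/minikube-linux-arm64 ssh -p auto-428619 "pgrep -a kubelet"
out/minikube-linux-arm64 ssh -p auto-428619 "pgrep -a kubelet" | tr ' ' '\n' | grep '^--'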

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n7k8l" [0c9a3b0d-5e2d-45bb-a085-00636bda8f4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n7k8l" [0c9a3b0d-5e2d-45bb-a085-00636bda8f4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004820773s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)
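
The netcat step force-replaces the probe deployment from testdata/netcat-deployment.yaml and waits for its pod to become Ready. The replace command is verbatim from the log; the rollout check is an added rough equivalent of the label-based wait above (the manifest itself is not reproduced here):

kubectl --context auto-428619 replace --force -f testdata/netcat-deployment.yaml
# added check, roughly equivalent to waiting for app=netcat pods to be Running
kubectl --context auto-428619 rollout status deployment/netcat --timeout=15m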

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
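
Taken together, the three probes above check cluster DNS, pod-local connectivity, and hairpin traffic back to the pod's own service, all from inside the netcat deployment. The commands are copied from the log:

kubectl --context auto-428619 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"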

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7dwbd" [821780bd-7f10-4688-9968-0a432c62b613] Running
E0920 19:15:29.214097  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/old-k8s-version-809747/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004529067s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
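
The controller check waits for the kindnet daemon pod to report healthy. An equivalent manual check; the kubectl wait form is an added convenience, while the app=kindnet label and the kube-system namespace are taken from the output above:

kubectl --context kindnet-428619 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m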

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-428619 "pgrep -a kubelet"
I0920 19:15:34.876468  446783 config.go:182] Loaded profile config "kindnet-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4svz7" [c56b0db3-0f06-4bb2-88ad-aff205c73649] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4svz7" [c56b0db3-0f06-4bb2-88ad-aff205c73649] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004355525s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.46s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.73260809s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.73s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0920 19:16:14.924198  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:42.627306  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/no-preload-851913/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.342578865s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2ps64" [1491d239-3419-447f-a740-842d39f9467c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004241849s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-428619 "pgrep -a kubelet"
I0920 19:16:57.107251  446783 config.go:182] Loaded profile config "calico-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9dm9v" [dbb3ff6e-b2d9-497a-bce4-64d0889356e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9dm9v" [dbb3ff6e-b2d9-497a-bce4-64d0889356e9] Running
E0920 19:17:04.795624  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/functional-252518/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004744969s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-428619 "pgrep -a kubelet"
I0920 19:17:07.723798  446783 config.go:182] Loaded profile config "custom-flannel-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vmwnx" [ff137c22-3207-4846-bdb9-0b5177ee2f1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vmwnx" [ff137c22-3207-4846-bdb9-0b5177ee2f1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006274344s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m23.407311047s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0920 19:18:10.958963  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/addons-610387/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:25.998954  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.005368  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.016799  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.038210  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.079642  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.161153  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.322869  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:26.644983  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:27.286771  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:28.568833  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:31.130650  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:36.252874  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.84773613s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bl8w6" [e709134c-2bd5-460e-9275-ef2ced6f09e8] Running
E0920 19:18:46.494594  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00442781s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
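The "app=flannel" pods waited on here belong to the kube-flannel DaemonSet that the --cni=flannel option deploys (the pod name "kube-flannel-ds-bl8w6" reflects that DaemonSet). A trimmed sketch of the labels and selector the check matches against follows; the image tag and args are illustrative assumptions, not values taken from this report:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel     # the namespace the ControllerPod check polls
  labels:
    app: flannel              # the label the check waits to become healthy
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.25.6   # illustrative tag
        args:
        - --ip-masq
        - --kube-subnet-mgr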

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-428619 "pgrep -a kubelet"
I0920 19:18:51.944198  446783 config.go:182] Loaded profile config "flannel-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q89qg" [63a28aac-68e9-4905-9174-a4b4edf07084] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q89qg" [63a28aac-68e9-4905-9174-a4b4edf07084] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.013277232s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-428619 "pgrep -a kubelet"
I0920 19:18:58.191668  446783 config.go:182] Loaded profile config "enable-default-cni-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7zj6" [7161c270-ea01-45d8-b5ff-ba6599232188] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d7zj6" [7161c270-ea01-45d8-b5ff-ba6599232188] Running
E0920 19:19:06.976099  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/default-k8s-diff-port-178984/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00383293s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-428619 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m16.270798912s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-428619 "pgrep -a kubelet"
I0920 19:20:43.116652  446783 config.go:182] Loaded profile config "bridge-428619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-428619 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h2f5d" [8fc40f12-f8aa-4898-8534-ddac4ef28ace] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 19:20:45.675992  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/auto-428619/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-h2f5d" [8fc40f12-f8aa-4898-8534-ddac4ef28ace] Running
E0920 19:20:49.022751  446783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/kindnet-428619/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003652202s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-428619 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-428619 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-026476 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-026476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-026476
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-741509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-741509
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-428619 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-428619" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19679-440039/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 18:55:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-226210
contexts:
- context:
    cluster: force-systemd-flag-226210
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 18:55:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-226210
  name: force-systemd-flag-226210
current-context: force-systemd-flag-226210
kind: Config
preferences: {}
users:
- name: force-systemd-flag-226210
  user:
    client-certificate: /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/force-systemd-flag-226210/client.crt
    client-key: /home/jenkins/minikube-integration/19679-440039/.minikube/profiles/force-systemd-flag-226210/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-428619

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428619"

                                                
                                                
----------------------- debugLogs end: kubenet-428619 [took: 5.269589978s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-428619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-428619
--- SKIP: TestNetworkPlugins/group/kubenet (5.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-428619 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-428619

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-428619

>>> host: /etc/nsswitch.conf:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/hosts:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/resolv.conf:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-428619

>>> host: crictl pods:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: crictl containers:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> k8s: describe netcat deployment:
error: context "cilium-428619" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-428619" does not exist

>>> k8s: netcat logs:
error: context "cilium-428619" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-428619" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-428619" does not exist

>>> k8s: coredns logs:
error: context "cilium-428619" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-428619" does not exist

>>> k8s: api server logs:
error: context "cilium-428619" does not exist

>>> host: /etc/cni:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: ip a s:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: ip r s:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: iptables-save:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: iptables table nat:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-428619

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-428619

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-428619" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-428619" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-428619

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-428619

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-428619" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-428619" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-428619" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-428619" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-428619" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: kubelet daemon config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> k8s: kubelet logs:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-428619

>>> host: docker daemon status:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: docker daemon config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: docker system info:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: cri-docker daemon status:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: cri-docker daemon config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: cri-dockerd version:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: containerd daemon status:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: containerd daemon config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: containerd config dump:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: crio daemon status:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: crio daemon config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: /etc/crio:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

>>> host: crio config:
* Profile "cilium-428619" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428619"

----------------------- debugLogs end: cilium-428619 [took: 5.366019381s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-428619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-428619
--- SKIP: TestNetworkPlugins/group/cilium (5.57s)