Test Report: Docker_Linux_containerd_arm64 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Test fail (2/327)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 199.87       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 374.59       |
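A failed case like these can usually be re-run in isolation from the minikube integration suite with the standard `go test -run` filter. A minimal sketch, assuming a checkout of the minikube repository at the commit above with the tests under test/integration and a prebuilt out/minikube-linux-arm64 binary; the suite may also expect driver or binary flags that are not shown in this report:

    # Hypothetical local re-run of one failed test; -run, -v and -timeout are
    # standard `go test` flags, the package path is an assumption about the repo layout.
    go test ./test/integration -run "TestAddons/serial/Volcano" -v -timeout 60m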
TestAddons/serial/Volcano (199.87s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 52.649049ms
addons_test.go:851: volcano-controller stabilized in 52.807457ms
addons_test.go:835: volcano-scheduler stabilized in 52.87268ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-hrx4m" [638d60be-5f83-402c-95ff-c3db8661f150] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003812667s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zt55r" [18c1842f-c54c-41ec-ba6d-409238959f7d] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003503647s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nrv6p" [ce3e043f-7369-490f-8c37-b30e3e841892] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006287585s
addons_test.go:870: (dbg) Run:  kubectl --context addons-545041 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-545041 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-545041 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a33fe9e3-534f-4bf3-8e60-342c71644e7c] Pending
helpers_test.go:344: "test-job-nginx-0" [a33fe9e3-534f-4bf3-8e60-342c71644e7c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-545041 -n addons-545041
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-20 17:46:46.145312286 +0000 UTC m=+431.310756200
addons_test.go:902: (dbg) Run:  kubectl --context addons-545041 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-545041 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-0bfd5ff8-71a5-4e9d-806e-cf61d13bf612
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nfqg8 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-nfqg8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age   From     Message
----     ------            ----  ----     -------
Warning  FailedScheduling  3m    volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
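The FailedScheduling event, together with the cpu: 1 request in the container spec above, means the single node no longer has a whole CPU unrequested once the other addon pods are placed. One way to confirm that on a live cluster is to compare the node's allocatable CPU against the per-pod requests; a minimal sketch using only standard kubectl output (the context name is taken from the commands above, the node name is left to kubectl):

    # Allocatable CPU and the node's currently allocated requests/limits.
    kubectl --context addons-545041 describe nodes | grep -A 12 "Allocated resources"
    # Per-pod CPU requests across all namespaces, to see what is consuming the 2 CPUs.
    kubectl --context addons-545041 get pods -A \
      -o custom-columns=NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu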
addons_test.go:902: (dbg) Run:  kubectl --context addons-545041 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-545041 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
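When this reproduces locally, the usual mitigation is to give the profile more CPU headroom than the 2 CPUs used here so the 1-CPU Volcano job can still schedule next to the other addon pods. A minimal sketch; --cpus and --memory are standard minikube start flags, and the 4/6000 values are arbitrary examples rather than anything taken from this run:

    # Hypothetical start with extra headroom; the remaining flags mirror the Audit log below.
    out/minikube-linux-arm64 start -p addons-545041 --driver=docker \
      --container-runtime=containerd --cpus=4 --memory=6000 --addons=volcano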
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-545041
helpers_test.go:235: (dbg) docker inspect addons-545041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a",
	        "Created": "2024-09-20T17:40:15.724737123Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300934,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T17:40:15.870201681Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a/hosts",
	        "LogPath": "/var/lib/docker/containers/aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a/aed7ef0baeef2c3beca07353f15287999c50eff9f37684cdbf6951eec309b31a-json.log",
	        "Name": "/addons-545041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-545041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-545041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dfbb96406d801b139ea3755f64b96cefce82275a7d3a7673fb004350e35da2a9-init/diff:/var/lib/docker/overlay2/3c4c9ed4137da049c491f1302314a8de7bd30a1897b7cd29bbcd1724ef9b7a93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfbb96406d801b139ea3755f64b96cefce82275a7d3a7673fb004350e35da2a9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfbb96406d801b139ea3755f64b96cefce82275a7d3a7673fb004350e35da2a9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfbb96406d801b139ea3755f64b96cefce82275a7d3a7673fb004350e35da2a9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-545041",
	                "Source": "/var/lib/docker/volumes/addons-545041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-545041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-545041",
	                "name.minikube.sigs.k8s.io": "addons-545041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97dde973e2f45c5b697a29991f610081bef2d470429a25c19e1ad21a02deead1",
	            "SandboxKey": "/var/run/docker/netns/97dde973e2f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-545041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c76c89ef4e139d33c9c6962966fad9c9bf51e7d5aa755182a07f1aae8caa388f",
	                    "EndpointID": "296060e0338a9c897b6cf02769a868fffae95536d5528efd108b3086dc1c6e52",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-545041",
	                        "aed7ef0baeef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
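The HostConfig section of the inspect output shows the resource ceiling this failure ran into: Memory is 4194304000 bytes (matching the --memory=4000 passed at start) and NanoCpus is 2000000000, i.e. 2 CPUs, which is all the Volcano job's 1-CPU request has to fit into alongside every other addon pod. Those two fields can be read back directly with a format string; docker inspect -f is a standard flag and the field names match the JSON above:

    # Print just the CPU and memory limits of the minikube node container.
    docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-545041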
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-545041 -n addons-545041
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 logs -n 25: (1.539902053s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-095043   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | -p download-only-095043              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| delete  | -p download-only-095043              | download-only-095043   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| start   | -o=json --download-only              | download-only-824252   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | -p download-only-824252              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| delete  | -p download-only-824252              | download-only-824252   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| delete  | -p download-only-095043              | download-only-095043   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| delete  | -p download-only-824252              | download-only-824252   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| start   | --download-only -p                   | download-docker-282367 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | download-docker-282367               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-282367            | download-docker-282367 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| start   | --download-only -p                   | binary-mirror-765360   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | binary-mirror-765360                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44001               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-765360              | binary-mirror-765360   | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| addons  | enable dashboard -p                  | addons-545041          | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | addons-545041                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-545041          | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | addons-545041                        |                        |         |         |                     |                     |
	| start   | -p addons-545041 --wait=true         | addons-545041          | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:39:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:39:50.922120  300442 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:39:50.922333  300442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:50.922359  300442 out.go:358] Setting ErrFile to fd 2...
	I0920 17:39:50.922379  300442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:50.922646  300442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 17:39:50.923165  300442 out.go:352] Setting JSON to false
	I0920 17:39:50.924044  300442 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4941,"bootTime":1726849050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 17:39:50.924134  300442 start.go:139] virtualization:  
	I0920 17:39:50.927055  300442 out.go:177] * [addons-545041] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:39:50.929349  300442 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:39:50.929400  300442 notify.go:220] Checking for updates...
	I0920 17:39:50.933836  300442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:39:50.935849  300442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:39:50.937611  300442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 17:39:50.939299  300442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:39:50.941178  300442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:39:50.943291  300442 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:39:50.967146  300442 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:39:50.967272  300442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:51.027161  300442 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 17:39:51.016872814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:51.027286  300442 docker.go:318] overlay module found
	I0920 17:39:51.031484  300442 out.go:177] * Using the docker driver based on user configuration
	I0920 17:39:51.033497  300442 start.go:297] selected driver: docker
	I0920 17:39:51.033525  300442 start.go:901] validating driver "docker" against <nil>
	I0920 17:39:51.033540  300442 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:39:51.034242  300442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:51.089687  300442 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 17:39:51.080035197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:51.089906  300442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:39:51.090155  300442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:39:51.092404  300442 out.go:177] * Using Docker driver with root privileges
	I0920 17:39:51.094520  300442 cni.go:84] Creating CNI manager for ""
	I0920 17:39:51.094600  300442 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 17:39:51.094613  300442 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:39:51.094705  300442 start.go:340] cluster config:
	{Name:addons-545041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-545041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:39:51.097008  300442 out.go:177] * Starting "addons-545041" primary control-plane node in "addons-545041" cluster
	I0920 17:39:51.098853  300442 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 17:39:51.100972  300442 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 17:39:51.102900  300442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 17:39:51.102968  300442 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 17:39:51.102981  300442 cache.go:56] Caching tarball of preloaded images
	I0920 17:39:51.102987  300442 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 17:39:51.103145  300442 preload.go:172] Found /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 17:39:51.103158  300442 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 17:39:51.103529  300442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/config.json ...
	I0920 17:39:51.103562  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/config.json: {Name:mkedf171ca75aed0795849a0adf6447951e7c25f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:39:51.119094  300442 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:39:51.119206  300442 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 17:39:51.119225  300442 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 17:39:51.119230  300442 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 17:39:51.119237  300442 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 17:39:51.119243  300442 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 17:40:08.709637  300442 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 17:40:08.709677  300442 cache.go:194] Successfully downloaded all kic artifacts
	I0920 17:40:08.709719  300442 start.go:360] acquireMachinesLock for addons-545041: {Name:mkd4fbf90a68809e85b139772005b0681850c18b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:40:08.709843  300442 start.go:364] duration metric: took 100.701µs to acquireMachinesLock for "addons-545041"
	I0920 17:40:08.709888  300442 start.go:93] Provisioning new machine with config: &{Name:addons-545041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-545041 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 17:40:08.709970  300442 start.go:125] createHost starting for "" (driver="docker")
	I0920 17:40:08.712258  300442 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 17:40:08.712510  300442 start.go:159] libmachine.API.Create for "addons-545041" (driver="docker")
	I0920 17:40:08.712545  300442 client.go:168] LocalClient.Create starting
	I0920 17:40:08.712661  300442 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem
	I0920 17:40:09.050066  300442 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem
	I0920 17:40:09.448736  300442 cli_runner.go:164] Run: docker network inspect addons-545041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 17:40:09.465836  300442 cli_runner.go:211] docker network inspect addons-545041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 17:40:09.465918  300442 network_create.go:284] running [docker network inspect addons-545041] to gather additional debugging logs...
	I0920 17:40:09.465956  300442 cli_runner.go:164] Run: docker network inspect addons-545041
	W0920 17:40:09.480690  300442 cli_runner.go:211] docker network inspect addons-545041 returned with exit code 1
	I0920 17:40:09.480725  300442 network_create.go:287] error running [docker network inspect addons-545041]: docker network inspect addons-545041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-545041 not found
	I0920 17:40:09.480739  300442 network_create.go:289] output of [docker network inspect addons-545041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-545041 not found
	
	** /stderr **
	I0920 17:40:09.480857  300442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:40:09.499499  300442 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e66c0}
	I0920 17:40:09.499539  300442 network_create.go:124] attempt to create docker network addons-545041 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 17:40:09.499598  300442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-545041 addons-545041
	I0920 17:40:09.578172  300442 network_create.go:108] docker network addons-545041 192.168.49.0/24 created
	I0920 17:40:09.578209  300442 kic.go:121] calculated static IP "192.168.49.2" for the "addons-545041" container
	I0920 17:40:09.578296  300442 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 17:40:09.592995  300442 cli_runner.go:164] Run: docker volume create addons-545041 --label name.minikube.sigs.k8s.io=addons-545041 --label created_by.minikube.sigs.k8s.io=true
	I0920 17:40:09.610108  300442 oci.go:103] Successfully created a docker volume addons-545041
	I0920 17:40:09.610207  300442 cli_runner.go:164] Run: docker run --rm --name addons-545041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545041 --entrypoint /usr/bin/test -v addons-545041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 17:40:11.611726  300442 cli_runner.go:217] Completed: docker run --rm --name addons-545041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545041 --entrypoint /usr/bin/test -v addons-545041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.001473122s)
	I0920 17:40:11.611756  300442 oci.go:107] Successfully prepared a docker volume addons-545041
	I0920 17:40:11.611784  300442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 17:40:11.611804  300442 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 17:40:11.611873  300442 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-545041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 17:40:15.662423  300442 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-545041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.050509919s)
	I0920 17:40:15.662455  300442 kic.go:203] duration metric: took 4.050648651s to extract preloaded images to volume ...
	W0920 17:40:15.662585  300442 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 17:40:15.662698  300442 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 17:40:15.711436  300442 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-545041 --name addons-545041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-545041 --network addons-545041 --ip 192.168.49.2 --volume addons-545041:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 17:40:16.028367  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Running}}
	I0920 17:40:16.050658  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:16.074595  300442 cli_runner.go:164] Run: docker exec addons-545041 stat /var/lib/dpkg/alternatives/iptables
	I0920 17:40:16.141036  300442 oci.go:144] the created container "addons-545041" has a running status.
	I0920 17:40:16.141067  300442 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa...
	I0920 17:40:16.335683  300442 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 17:40:16.369276  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:16.392988  300442 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 17:40:16.393013  300442 kic_runner.go:114] Args: [docker exec --privileged addons-545041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 17:40:16.482935  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:16.502689  300442 machine.go:93] provisionDockerMachine start ...
	I0920 17:40:16.502779  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:16.526123  300442 main.go:141] libmachine: Using SSH client type: native
	I0920 17:40:16.526391  300442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0920 17:40:16.526412  300442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:40:16.526940  300442 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37480->127.0.0.1:33139: read: connection reset by peer
	I0920 17:40:19.662377  300442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-545041
	
	I0920 17:40:19.662401  300442 ubuntu.go:169] provisioning hostname "addons-545041"
	I0920 17:40:19.662471  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:19.679329  300442 main.go:141] libmachine: Using SSH client type: native
	I0920 17:40:19.679575  300442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0920 17:40:19.679593  300442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-545041 && echo "addons-545041" | sudo tee /etc/hostname
	I0920 17:40:19.823397  300442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-545041
	
	I0920 17:40:19.823483  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:19.841175  300442 main.go:141] libmachine: Using SSH client type: native
	I0920 17:40:19.841436  300442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0920 17:40:19.841460  300442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-545041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-545041/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-545041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:40:19.975101  300442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:40:19.975127  300442 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-294290/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-294290/.minikube}
	I0920 17:40:19.975152  300442 ubuntu.go:177] setting up certificates
	I0920 17:40:19.975161  300442 provision.go:84] configureAuth start
	I0920 17:40:19.975228  300442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545041
	I0920 17:40:19.991744  300442 provision.go:143] copyHostCerts
	I0920 17:40:19.991827  300442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/ca.pem (1082 bytes)
	I0920 17:40:19.991957  300442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/cert.pem (1123 bytes)
	I0920 17:40:19.992022  300442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/key.pem (1679 bytes)
	I0920 17:40:19.992072  300442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem org=jenkins.addons-545041 san=[127.0.0.1 192.168.49.2 addons-545041 localhost minikube]
	I0920 17:40:20.750524  300442 provision.go:177] copyRemoteCerts
	I0920 17:40:20.750600  300442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:40:20.750641  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:20.766682  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:20.867977  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:40:20.892415  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 17:40:20.917084  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:40:20.940967  300442 provision.go:87] duration metric: took 965.789978ms to configureAuth
	I0920 17:40:20.941039  300442 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:40:20.941262  300442 config.go:182] Loaded profile config "addons-545041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:40:20.941277  300442 machine.go:96] duration metric: took 4.438566794s to provisionDockerMachine
	I0920 17:40:20.941285  300442 client.go:171] duration metric: took 12.228731796s to LocalClient.Create
	I0920 17:40:20.941311  300442 start.go:167] duration metric: took 12.228802802s to libmachine.API.Create "addons-545041"
	I0920 17:40:20.941324  300442 start.go:293] postStartSetup for "addons-545041" (driver="docker")
	I0920 17:40:20.941334  300442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:40:20.941394  300442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:40:20.941437  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:20.957610  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:21.053203  300442 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:40:21.056627  300442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:40:21.056670  300442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:40:21.056683  300442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:40:21.056691  300442 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:40:21.056701  300442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-294290/.minikube/addons for local assets ...
	I0920 17:40:21.056777  300442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-294290/.minikube/files for local assets ...
	I0920 17:40:21.056805  300442 start.go:296] duration metric: took 115.474534ms for postStartSetup
	I0920 17:40:21.057129  300442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545041
	I0920 17:40:21.074314  300442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/config.json ...
	I0920 17:40:21.074657  300442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:40:21.074709  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:21.092475  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:21.187832  300442 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:40:21.192468  300442 start.go:128] duration metric: took 12.482480272s to createHost
	I0920 17:40:21.192494  300442 start.go:83] releasing machines lock for "addons-545041", held for 12.482635291s
	I0920 17:40:21.192570  300442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545041
	I0920 17:40:21.212683  300442 ssh_runner.go:195] Run: cat /version.json
	I0920 17:40:21.212744  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:21.212992  300442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:40:21.213070  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:21.231494  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:21.232907  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:21.327392  300442 ssh_runner.go:195] Run: systemctl --version
	I0920 17:40:21.449990  300442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:40:21.454553  300442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:40:21.479374  300442 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:40:21.479477  300442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:40:21.507823  300442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
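The find/sed run at 17:40:21.454553 injects a "name" field into the loopback CNI config and pins its cniVersion to 1.0.0, while the run at 17:40:21.479477 moves the bridge/podman configs listed above out of the way. A minimal check of the patched loopback config, as a sketch:

	sudo cat /etc/cni/net.d/*loopback.conf*
	# expected, roughly:
	#   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }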
	I0920 17:40:21.507845  300442 start.go:495] detecting cgroup driver to use...
	I0920 17:40:21.507877  300442 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:40:21.507927  300442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 17:40:21.520575  300442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:40:21.532332  300442 docker.go:217] disabling cri-docker service (if available) ...
	I0920 17:40:21.532396  300442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 17:40:21.546581  300442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 17:40:21.561569  300442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 17:40:21.644489  300442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 17:40:21.731957  300442 docker.go:233] disabling docker service ...
	I0920 17:40:21.732072  300442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 17:40:21.751815  300442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 17:40:21.764341  300442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 17:40:21.852761  300442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 17:40:21.946817  300442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 17:40:21.958880  300442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:40:21.976877  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:40:21.988042  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:40:21.998059  300442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:40:21.998194  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:40:22.008362  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:40:22.024146  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:40:22.034763  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:40:22.044834  300442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:40:22.054529  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:40:22.064731  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:40:22.074695  300442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:40:22.085028  300442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:40:22.093753  300442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:40:22.102590  300442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:40:22.182337  300442 ssh_runner.go:195] Run: sudo systemctl restart containerd
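The sed edits between 17:40:21.976877 and 17:40:22.074695 rewrite /etc/containerd/config.toml before the restart above: pause image registry.k8s.io/pause:3.10, restrict_oom_score_adj = false, SystemdCgroup = false (cgroupfs driver), runc.v2 as the runtime, conf_dir = /etc/cni/net.d, and enable_unprivileged_ports = true. A sketch of how to confirm the result on the node:

	sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	# expected values, roughly:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true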
	I0920 17:40:22.316311  300442 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 17:40:22.316465  300442 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 17:40:22.320161  300442 start.go:563] Will wait 60s for crictl version
	I0920 17:40:22.320289  300442 ssh_runner.go:195] Run: which crictl
	I0920 17:40:22.323726  300442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:40:22.363658  300442 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 17:40:22.363819  300442 ssh_runner.go:195] Run: containerd --version
	I0920 17:40:22.389868  300442 ssh_runner.go:195] Run: containerd --version
	I0920 17:40:22.417223  300442 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0920 17:40:22.419233  300442 cli_runner.go:164] Run: docker network inspect addons-545041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:40:22.434660  300442 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 17:40:22.438346  300442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:40:22.449327  300442 kubeadm.go:883] updating cluster {Name:addons-545041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-545041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:40:22.449458  300442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 17:40:22.449526  300442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:40:22.487632  300442 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 17:40:22.487659  300442 containerd.go:534] Images already preloaded, skipping extraction
	I0920 17:40:22.487725  300442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 17:40:22.528317  300442 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 17:40:22.528340  300442 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:40:22.528349  300442 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0920 17:40:22.528443  300442 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-545041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-545041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
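The ExecStart flags above are what minikube writes into the kubelet systemd drop-in (installed below at 17:40:22.583305 as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once the unit is reloaded, the effective command line can be checked with standard systemctl queries, for example:

	sudo systemctl cat kubelet                 # unit file plus the 10-kubeadm.conf drop-in
	sudo systemctl show kubelet -p ExecStart   # effective ExecStart after the override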
	I0920 17:40:22.528521  300442 ssh_runner.go:195] Run: sudo crictl info
	I0920 17:40:22.565395  300442 cni.go:84] Creating CNI manager for ""
	I0920 17:40:22.565419  300442 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 17:40:22.565430  300442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:40:22.565455  300442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-545041 NodeName:addons-545041 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:40:22.565606  300442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-545041"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:40:22.565678  300442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:40:22.574558  300442 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:40:22.574649  300442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:40:22.583305  300442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 17:40:22.601267  300442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:40:22.619675  300442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
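The kubeadm config printed above is staged here as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init (see 17:40:25.445040 below). With newer kubeadm releases (1.26+) the file can also be sanity-checked on its own, as a sketch:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml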
	I0920 17:40:22.637693  300442 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 17:40:22.641450  300442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:40:22.652375  300442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:40:22.741922  300442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:40:22.757098  300442 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041 for IP: 192.168.49.2
	I0920 17:40:22.757124  300442 certs.go:194] generating shared ca certs ...
	I0920 17:40:22.757142  300442 certs.go:226] acquiring lock for ca certs: {Name:mke4cc07e532357ce4393d299e5243fb270e9472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:22.757274  300442 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key
	I0920 17:40:23.208745  300442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt ...
	I0920 17:40:23.208781  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt: {Name:mkbcdd0887a835f65cefd71ceb258bc8d1bfec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:23.208984  300442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key ...
	I0920 17:40:23.208997  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key: {Name:mk8df730bd9cc21b2ccfb1a060d5c6cc0fc7ff86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:23.209089  300442 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key
	I0920 17:40:23.792533  300442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.crt ...
	I0920 17:40:23.792565  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.crt: {Name:mkb882e7130beec138f62ec5c92579ebe58424e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:23.792757  300442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key ...
	I0920 17:40:23.792771  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key: {Name:mkee4a290b2bef6fccd4bfa2836a48f551ff804c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:23.792862  300442 certs.go:256] generating profile certs ...
	I0920 17:40:23.792923  300442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.key
	I0920 17:40:23.792953  300442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt with IP's: []
	I0920 17:40:24.300719  300442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt ...
	I0920 17:40:24.300754  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: {Name:mk32f45590b145a9b68f666d79c21d96f06395ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:24.300920  300442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.key ...
	I0920 17:40:24.300927  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.key: {Name:mk07023e95cda37d685ca581d21789aef9c8ab29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:24.301002  300442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key.ed95d28a
	I0920 17:40:24.301018  300442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt.ed95d28a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 17:40:24.565858  300442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt.ed95d28a ...
	I0920 17:40:24.565891  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt.ed95d28a: {Name:mk4546824be67816530868984bcd9623cafc6524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:24.566089  300442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key.ed95d28a ...
	I0920 17:40:24.566105  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key.ed95d28a: {Name:mked42f943e1661fb0f8eb11936530131fc8250e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:24.566192  300442 certs.go:381] copying /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt.ed95d28a -> /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt
	I0920 17:40:24.566272  300442 certs.go:385] copying /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key.ed95d28a -> /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key
	I0920 17:40:24.566327  300442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.key
	I0920 17:40:24.566350  300442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.crt with IP's: []
	I0920 17:40:25.105712  300442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.crt ...
	I0920 17:40:25.105745  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.crt: {Name:mk270b3edb5407994ccd0c5ec73fdf58175fc4dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:25.105936  300442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.key ...
	I0920 17:40:25.105952  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.key: {Name:mk56294c7d8c96082ec68f9df747b546f076434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:25.106877  300442 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:40:25.106928  300442 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:40:25.106956  300442 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:40:25.107000  300442 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem (1679 bytes)
	I0920 17:40:25.107795  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:40:25.136800  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:40:25.163635  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:40:25.191825  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:40:25.217195  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:40:25.242954  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:40:25.268287  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:40:25.293386  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:40:25.317806  300442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:40:25.342690  300442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
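The scp batch from 17:40:25.107795 onward places the cluster CA, proxy-client CA, API server, and proxy-client key material under /var/lib/minikube/certs, drops minikubeCA.pem into /usr/share/ca-certificates, and writes the kubeconfig above. A quick sanity check on the node, as a sketch:

	sudo ls -la /var/lib/minikube/certs /var/lib/minikube/kubeconfig
	# expect ca.*, proxy-client-ca.*, apiserver.*, proxy-client.* plus a ~738-byte kubeconfig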
	I0920 17:40:25.360412  300442 ssh_runner.go:195] Run: openssl version
	I0920 17:40:25.365853  300442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:40:25.375525  300442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:40:25.379177  300442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:40:25.379239  300442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:40:25.386243  300442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
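The b5213941.0 link created above is the OpenSSL subject-hash name for minikubeCA.pem, taken from the x509 -hash call two lines earlier. Verifying the pairing, as a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem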
	I0920 17:40:25.395688  300442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:40:25.398844  300442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:40:25.398894  300442 kubeadm.go:392] StartCluster: {Name:addons-545041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-545041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:40:25.398988  300442 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 17:40:25.399084  300442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 17:40:25.435625  300442 cri.go:89] found id: ""
	I0920 17:40:25.435720  300442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:40:25.445040  300442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:40:25.454275  300442 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 17:40:25.454353  300442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:40:25.463719  300442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:40:25.463740  300442 kubeadm.go:157] found existing configuration files:
	
	I0920 17:40:25.463834  300442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:40:25.473184  300442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:40:25.473257  300442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:40:25.482234  300442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:40:25.491701  300442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:40:25.491771  300442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:40:25.500460  300442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:40:25.509732  300442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:40:25.509802  300442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:40:25.518391  300442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:40:25.527256  300442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:40:25.527347  300442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:40:25.535999  300442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 17:40:25.575637  300442 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:40:25.575698  300442 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:40:25.594005  300442 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 17:40:25.594123  300442 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 17:40:25.594167  300442 kubeadm.go:310] OS: Linux
	I0920 17:40:25.594218  300442 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 17:40:25.594271  300442 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 17:40:25.594322  300442 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 17:40:25.594373  300442 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 17:40:25.594424  300442 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 17:40:25.594477  300442 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 17:40:25.594526  300442 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 17:40:25.594577  300442 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 17:40:25.594627  300442 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 17:40:25.657303  300442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:40:25.657426  300442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:40:25.658084  300442 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:40:25.667372  300442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:40:25.670886  300442 out.go:235]   - Generating certificates and keys ...
	I0920 17:40:25.671093  300442 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:40:25.671181  300442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:40:25.919027  300442 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:40:27.011530  300442 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:40:27.746846  300442 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:40:28.443221  300442 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:40:28.882054  300442 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:40:28.882390  300442 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-545041 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:40:29.397601  300442 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:40:29.397990  300442 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-545041 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:40:29.952043  300442 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:40:30.345369  300442 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:40:30.545906  300442 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:40:30.546150  300442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:40:31.202766  300442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:40:31.456550  300442 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:40:32.145605  300442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:40:32.361003  300442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:40:32.771089  300442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:40:32.771787  300442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:40:32.774705  300442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:40:32.776942  300442 out.go:235]   - Booting up control plane ...
	I0920 17:40:32.777049  300442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:40:32.777127  300442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:40:32.777863  300442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:40:32.790910  300442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:40:32.797900  300442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:40:32.797958  300442 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:40:32.899475  300442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:40:32.899600  300442 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:40:33.895818  300442 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001050366s
	I0920 17:40:33.895914  300442 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:40:39.397318  300442 kubeadm.go:310] [api-check] The API server is healthy after 5.501348065s
	I0920 17:40:39.418360  300442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:40:39.433190  300442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:40:39.457935  300442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:40:39.458154  300442 kubeadm.go:310] [mark-control-plane] Marking the node addons-545041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:40:39.468936  300442 kubeadm.go:310] [bootstrap-token] Using token: mkhp4v.69idas2xohna6tb1
	I0920 17:40:39.470936  300442 out.go:235]   - Configuring RBAC rules ...
	I0920 17:40:39.471089  300442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:40:39.475782  300442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:40:39.485173  300442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:40:39.488910  300442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:40:39.492382  300442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:40:39.496097  300442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:40:39.806486  300442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:40:40.230683  300442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:40:40.803939  300442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:40:40.805001  300442 kubeadm.go:310] 
	I0920 17:40:40.805080  300442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:40:40.805091  300442 kubeadm.go:310] 
	I0920 17:40:40.805168  300442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:40:40.805179  300442 kubeadm.go:310] 
	I0920 17:40:40.805206  300442 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:40:40.805268  300442 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:40:40.805322  300442 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:40:40.805330  300442 kubeadm.go:310] 
	I0920 17:40:40.805384  300442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:40:40.805392  300442 kubeadm.go:310] 
	I0920 17:40:40.805439  300442 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:40:40.805447  300442 kubeadm.go:310] 
	I0920 17:40:40.805499  300442 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:40:40.805577  300442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:40:40.805648  300442 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:40:40.805665  300442 kubeadm.go:310] 
	I0920 17:40:40.805753  300442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:40:40.805831  300442 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:40:40.805840  300442 kubeadm.go:310] 
	I0920 17:40:40.805923  300442 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mkhp4v.69idas2xohna6tb1 \
	I0920 17:40:40.806033  300442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:009f2f277f0d6f558e51fa5debabc49a410f86e580baa788473ea962bfaa3d61 \
	I0920 17:40:40.806061  300442 kubeadm.go:310] 	--control-plane 
	I0920 17:40:40.806070  300442 kubeadm.go:310] 
	I0920 17:40:40.806165  300442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:40:40.806175  300442 kubeadm.go:310] 
	I0920 17:40:40.806256  300442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mkhp4v.69idas2xohna6tb1 \
	I0920 17:40:40.806362  300442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:009f2f277f0d6f558e51fa5debabc49a410f86e580baa788473ea962bfaa3d61 
	I0920 17:40:40.809365  300442 kubeadm.go:310] W0920 17:40:25.571253    1018 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:40:40.809667  300442 kubeadm.go:310] W0920 17:40:25.573091    1018 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:40:40.809878  300442 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 17:40:40.809982  300442 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
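The two API-deprecation warnings above refer to the kubeadm.k8s.io/v1beta3 spec used in the generated config; kubeadm's own suggestion is to migrate the file forward. An illustrative run against the config used here (the output path is hypothetical):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml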
	I0920 17:40:40.809998  300442 cni.go:84] Creating CNI manager for ""
	I0920 17:40:40.810005  300442 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 17:40:40.812340  300442 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 17:40:40.814187  300442 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 17:40:40.818539  300442 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 17:40:40.818555  300442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 17:40:40.836381  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 17:40:41.110993  300442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:40:41.111147  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:41.111234  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-545041 minikube.k8s.io/updated_at=2024_09_20T17_40_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-545041 minikube.k8s.io/primary=true
	I0920 17:40:41.118290  300442 ops.go:34] apiserver oom_adj: -16
	I0920 17:40:41.240775  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:41.741787  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:42.241746  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:42.740937  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:43.241711  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:43.740927  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:44.241339  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:44.741076  300442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:40:44.839266  300442 kubeadm.go:1113] duration metric: took 3.728164375s to wait for elevateKubeSystemPrivileges
	I0920 17:40:44.839296  300442 kubeadm.go:394] duration metric: took 19.440406395s to StartCluster
	I0920 17:40:44.839313  300442 settings.go:142] acquiring lock: {Name:mk4f88389204d2653ab82e878e61c50b8437ae37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:44.839430  300442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:40:44.839807  300442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/kubeconfig: {Name:mk99ef3647d0cf66fbb7a624c924e5cee2350dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:40:44.839989  300442 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 17:40:44.840176  300442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:40:44.840440  300442 config.go:182] Loaded profile config "addons-545041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:40:44.840556  300442 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:40:44.840638  300442 addons.go:69] Setting yakd=true in profile "addons-545041"
	I0920 17:40:44.840651  300442 addons.go:234] Setting addon yakd=true in "addons-545041"
	I0920 17:40:44.840675  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.841177  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.841730  300442 addons.go:69] Setting metrics-server=true in profile "addons-545041"
	I0920 17:40:44.841746  300442 addons.go:234] Setting addon metrics-server=true in "addons-545041"
	I0920 17:40:44.841772  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.842198  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.842357  300442 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-545041"
	I0920 17:40:44.842371  300442 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-545041"
	I0920 17:40:44.842391  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.842785  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.849922  300442 addons.go:69] Setting registry=true in profile "addons-545041"
	I0920 17:40:44.850011  300442 addons.go:234] Setting addon registry=true in "addons-545041"
	I0920 17:40:44.850080  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.850623  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.854213  300442 addons.go:69] Setting cloud-spanner=true in profile "addons-545041"
	I0920 17:40:44.854299  300442 addons.go:234] Setting addon cloud-spanner=true in "addons-545041"
	I0920 17:40:44.854347  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.859061  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.864307  300442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-545041"
	I0920 17:40:44.864373  300442 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-545041"
	I0920 17:40:44.864405  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.866815  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.871212  300442 addons.go:69] Setting storage-provisioner=true in profile "addons-545041"
	I0920 17:40:44.871251  300442 addons.go:234] Setting addon storage-provisioner=true in "addons-545041"
	I0920 17:40:44.871286  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.871795  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.876508  300442 addons.go:69] Setting default-storageclass=true in profile "addons-545041"
	I0920 17:40:44.876590  300442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-545041"
	I0920 17:40:44.876995  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.887187  300442 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-545041"
	I0920 17:40:44.887219  300442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-545041"
	I0920 17:40:44.887571  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.891199  300442 addons.go:69] Setting gcp-auth=true in profile "addons-545041"
	I0920 17:40:44.891277  300442 mustload.go:65] Loading cluster: addons-545041
	I0920 17:40:44.891507  300442 config.go:182] Loaded profile config "addons-545041": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:40:44.891817  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.907107  300442 addons.go:69] Setting ingress=true in profile "addons-545041"
	I0920 17:40:44.907142  300442 addons.go:234] Setting addon ingress=true in "addons-545041"
	I0920 17:40:44.907188  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.907845  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.909404  300442 addons.go:69] Setting volcano=true in profile "addons-545041"
	I0920 17:40:44.909428  300442 addons.go:234] Setting addon volcano=true in "addons-545041"
	I0920 17:40:44.909465  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.909987  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.922274  300442 addons.go:69] Setting volumesnapshots=true in profile "addons-545041"
	I0920 17:40:44.922304  300442 addons.go:234] Setting addon volumesnapshots=true in "addons-545041"
	I0920 17:40:44.922350  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.922816  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.941175  300442 addons.go:69] Setting ingress-dns=true in profile "addons-545041"
	I0920 17:40:44.941268  300442 addons.go:234] Setting addon ingress-dns=true in "addons-545041"
	I0920 17:40:44.941344  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.941880  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:44.942231  300442 out.go:177] * Verifying Kubernetes components...
	I0920 17:40:44.953167  300442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:40:44.964299  300442 addons.go:69] Setting inspektor-gadget=true in profile "addons-545041"
	I0920 17:40:44.964376  300442 addons.go:234] Setting addon inspektor-gadget=true in "addons-545041"
	I0920 17:40:44.964443  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:44.965102  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
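Each "Setting addon ... in profile" entry above is immediately followed by a docker container inspect call with --format={{.State.Status}}: before enabling an addon, minikube confirms that the Docker container backing the profile is still running. A rough manual equivalent of that status probe, using the container name from this run, would be:

    docker container inspect addons-545041 --format '{{.State.Status}}'
    # expected to print "running" for a healthy profile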
	I0920 17:40:45.064577  300442 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:40:45.066942  300442 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:40:45.069504  300442 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:40:45.070302  300442 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:40:45.070335  300442 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:40:45.070420  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.072100  300442 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:40:45.072129  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:40:45.072211  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.089979  300442 addons.go:234] Setting addon default-storageclass=true in "addons-545041"
	I0920 17:40:45.090042  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:45.090541  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:45.094394  300442 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-545041"
	I0920 17:40:45.094453  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:45.094953  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:45.098242  300442 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:40:45.127246  300442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:40:45.127697  300442 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 17:40:45.129823  300442 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:40:45.129852  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:40:45.129942  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.145752  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:40:45.148371  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:40:45.151235  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:40:45.153175  300442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:40:45.155207  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:40:45.158496  300442 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:40:45.179646  300442 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:40:45.159185  300442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:40:45.159235  300442 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:40:45.180595  300442 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:40:45.186870  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:40:45.189585  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:40:45.189921  300442 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:40:45.189985  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:40:45.190714  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.197162  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:40:45.201539  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:40:45.180719  300442 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:40:45.202062  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.203442  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:40:45.203474  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:40:45.203553  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.180830  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:40:45.216846  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.220706  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.180945  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:40:45.231915  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.239367  300442 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:40:45.241363  300442 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:40:45.248489  300442 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:40:45.251648  300442 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:40:45.251680  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:40:45.251759  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.257279  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:45.266182  300442 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:40:45.266207  300442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:40:45.266283  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.285773  300442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:40:45.285838  300442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:40:45.288770  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:40:45.288819  300442 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:40:45.292910  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.299185  300442 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:40:45.305540  300442 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:40:45.305569  300442 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:40:45.305659  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.306121  300442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:40:45.314558  300442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:40:45.335467  300442 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:40:45.335499  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:40:45.335581  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.373149  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.374573  300442 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:40:45.376544  300442 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:40:45.378437  300442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:40:45.378464  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:40:45.378534  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:45.414513  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.463218  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.499712  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.515773  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.527297  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.530870  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.535145  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.542680  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.549887  300442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:40:45.553137  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.564235  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.568721  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:45.579125  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
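The docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above look up which host port Docker mapped to the node container's SSH port, and each sshutil.go:53 line then opens an SSH session to 127.0.0.1 on that port (33139 in this run) using the profile's private key. Approximately the same connection could be made by hand with:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-545041
    # 33139
    ssh -i /home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa -p 33139 docker@127.0.0.1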
	I0920 17:40:46.161218  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:40:46.168609  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:40:46.292869  300442 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:40:46.292935  300442 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:40:46.330492  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:40:46.368346  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:40:46.380604  300442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:40:46.380678  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:40:46.386420  300442 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:40:46.386486  300442 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:40:46.395769  300442 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:40:46.395845  300442 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:40:46.397295  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:40:46.408025  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:40:46.463400  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:40:46.474302  300442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:40:46.474369  300442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:40:46.486355  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:40:46.486556  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:40:46.486534  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:40:46.582971  300442 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:40:46.583068  300442 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:40:46.652118  300442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:40:46.652185  300442 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:40:46.654419  300442 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:40:46.654487  300442 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:40:46.671129  300442 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:40:46.671196  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:40:46.797769  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:40:46.797847  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:40:46.820556  300442 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:40:46.820632  300442 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:40:46.826566  300442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:40:46.826671  300442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:40:46.884283  300442 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:40:46.884358  300442 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:40:46.922117  300442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:40:46.922184  300442 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:40:46.971294  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:40:46.997979  300442 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:40:46.998053  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:40:47.017215  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:40:47.070094  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:40:47.070161  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:40:47.103556  300442 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.976271934s)
	I0920 17:40:47.103657  300442 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
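The long bash pipeline that just completed rewrites the coredns ConfigMap in place: it dumps the Corefile with kubectl get configmap coredns -o yaml, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" directive, and feeds the result back through kubectl replace. The stanza injected (visible in the command itself) is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

which is what the "host record injected into CoreDNS's ConfigMap" line refers to: in-cluster DNS will now resolve host.minikube.internal to the host-side gateway address 192.168.49.1.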
	I0920 17:40:47.103634  300442 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.553718851s)
	I0920 17:40:47.108006  300442 node_ready.go:35] waiting up to 6m0s for node "addons-545041" to be "Ready" ...
	I0920 17:40:47.126560  300442 node_ready.go:49] node "addons-545041" has status "Ready":"True"
	I0920 17:40:47.129532  300442 node_ready.go:38] duration metric: took 21.449108ms for node "addons-545041" to be "Ready" ...
	I0920 17:40:47.129572  300442 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:40:47.158020  300442 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-47bxz" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:47.186826  300442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:40:47.186891  300442 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:40:47.233465  300442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:40:47.233536  300442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:40:47.391532  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:40:47.391604  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:40:47.402729  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:40:47.549889  300442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:40:47.549915  300442 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:40:47.608264  300442 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-545041" context rescaled to 1 replicas
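The kapi.go:214 line above notes that the coredns Deployment was scaled from its initial replica count down to one for this single-node profile; outside the harness that rescale would look roughly like:

    kubectl --context addons-545041 -n kube-system scale deployment coredns --replicas=1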
	I0920 17:40:47.613395  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:40:47.613425  300442 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:40:47.661624  300442 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-47bxz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-47bxz" not found
	I0920 17:40:47.661656  300442 pod_ready.go:82] duration metric: took 503.52724ms for pod "coredns-7c65d6cfc9-47bxz" in "kube-system" namespace to be "Ready" ...
	E0920 17:40:47.661668  300442 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-47bxz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-47bxz" not found
	I0920 17:40:47.661675  300442 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:47.701386  300442 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:40:47.701420  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:40:47.721124  300442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:40:47.721154  300442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:40:47.857405  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.696090536s)
	I0920 17:40:47.937294  300442 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:40:47.937334  300442 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:40:48.026611  300442 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:40:48.026649  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:40:48.049869  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:40:48.091849  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:40:48.091877  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:40:48.230572  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:40:48.321226  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:40:48.321259  300442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:40:48.497681  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.328992124s)
	I0920 17:40:48.541283  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:40:48.541316  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:40:48.805445  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:40:48.805489  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:40:49.171577  300442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:40:49.171605  300442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:40:49.554160  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:40:49.701276  300442 pod_ready.go:103] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"False"
	I0920 17:40:50.533788  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203209266s)
	I0920 17:40:52.172555  300442 pod_ready.go:103] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"False"
	I0920 17:40:52.531269  300442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:40:52.531411  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:52.557928  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:53.026878  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.658449659s)
	I0920 17:40:53.026911  300442 addons.go:475] Verifying addon ingress=true in "addons-545041"
	I0920 17:40:53.027250  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.629806084s)
	I0920 17:40:53.027299  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.619204746s)
	I0920 17:40:53.028951  300442 out.go:177] * Verifying ingress addon...
	I0920 17:40:53.031557  300442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:40:53.036208  300442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:40:53.036233  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
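The kapi.go:75/96 lines here and below are a polling loop: minikube repeatedly lists pods in the ingress-nginx namespace matching the label app.kubernetes.io/name=ingress-nginx and logs their state until they report Ready. Expressed as a one-shot kubectl command (assuming the same selector, namespace and context), the wait would be roughly:

    kubectl --context addons-545041 -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready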
	I0920 17:40:53.155089  300442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:40:53.330814  300442 addons.go:234] Setting addon gcp-auth=true in "addons-545041"
	I0920 17:40:53.330907  300442 host.go:66] Checking if "addons-545041" exists ...
	I0920 17:40:53.331412  300442 cli_runner.go:164] Run: docker container inspect addons-545041 --format={{.State.Status}}
	I0920 17:40:53.356181  300442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:40:53.356239  300442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545041
	I0920 17:40:53.386941  300442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/addons-545041/id_rsa Username:docker}
	I0920 17:40:53.536886  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:54.068805  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:54.190323  300442 pod_ready.go:103] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"False"
	I0920 17:40:54.546697  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:55.063216  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.599732362s)
	I0920 17:40:55.063318  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.576675835s)
	I0920 17:40:55.063572  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.092205864s)
	I0920 17:40:55.063622  300442 addons.go:475] Verifying addon metrics-server=true in "addons-545041"
	I0920 17:40:55.063709  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.046379044s)
	I0920 17:40:55.063743  300442 addons.go:475] Verifying addon registry=true in "addons-545041"
	I0920 17:40:55.063930  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.661128777s)
	I0920 17:40:55.064233  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.014319604s)
	W0920 17:40:55.064270  300442 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:40:55.064303  300442 retry.go:31] will retry after 284.560988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
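The failure above is an ordering problem rather than a bad manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass, but the VolumeSnapshotClass CRD created by the very same kubectl apply invocation has not yet appeared in API discovery, so the resource-mapping lookup fails and minikube schedules a retry (re-run with --force at 17:40:55.349856 below). When applying such bundles by hand, one common way to avoid the race is to establish the CRDs first, for example:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml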
	I0920 17:40:55.064329  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.833724836s)
	I0920 17:40:55.066222  300442 out.go:177] * Verifying registry addon...
	I0920 17:40:55.066289  300442 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-545041 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:40:55.069536  300442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:40:55.085960  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:55.093130  300442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:40:55.093211  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:55.349856  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:40:55.536640  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:55.605902  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:55.762492  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.208271628s)
	I0920 17:40:55.762606  300442 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.406404748s)
	I0920 17:40:55.762823  300442 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-545041"
	I0920 17:40:55.764981  300442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:40:55.765125  300442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:40:55.768850  300442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:40:55.769914  300442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:40:55.770933  300442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:40:55.770954  300442 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:40:55.782479  300442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:40:55.782574  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:55.819229  300442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:40:55.819356  300442 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:40:55.868976  300442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:40:55.869070  300442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:40:55.894691  300442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:40:56.037598  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:56.073572  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:56.299207  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:56.537093  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:56.574226  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:56.669022  300442 pod_ready.go:103] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"False"
	I0920 17:40:56.775245  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:57.039957  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:57.149766  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:57.183440  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.833488208s)
	I0920 17:40:57.183566  300442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288803234s)
	I0920 17:40:57.189381  300442 addons.go:475] Verifying addon gcp-auth=true in "addons-545041"
	I0920 17:40:57.191293  300442 out.go:177] * Verifying gcp-auth addon...
	I0920 17:40:57.194391  300442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:40:57.239945  300442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:40:57.274810  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:57.536446  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:57.574629  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:57.775275  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:58.036060  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:58.075802  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:58.300348  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:58.538621  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:58.596880  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:58.669457  300442 pod_ready.go:103] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"False"
	I0920 17:40:58.774825  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:59.036587  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:59.073381  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:59.276076  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:40:59.538960  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:40:59.579582  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:40:59.668860  300442 pod_ready.go:93] pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace has status "Ready":"True"
	I0920 17:40:59.668888  300442 pod_ready.go:82] duration metric: took 12.00716596s for pod "coredns-7c65d6cfc9-gjmdv" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.668900  300442 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.675125  300442 pod_ready.go:93] pod "etcd-addons-545041" in "kube-system" namespace has status "Ready":"True"
	I0920 17:40:59.675150  300442 pod_ready.go:82] duration metric: took 6.241446ms for pod "etcd-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.675164  300442 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.684662  300442 pod_ready.go:93] pod "kube-apiserver-addons-545041" in "kube-system" namespace has status "Ready":"True"
	I0920 17:40:59.684733  300442 pod_ready.go:82] duration metric: took 9.559481ms for pod "kube-apiserver-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.684762  300442 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.690589  300442 pod_ready.go:93] pod "kube-controller-manager-addons-545041" in "kube-system" namespace has status "Ready":"True"
	I0920 17:40:59.690661  300442 pod_ready.go:82] duration metric: took 5.876253ms for pod "kube-controller-manager-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.690689  300442 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4djxc" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.698052  300442 pod_ready.go:93] pod "kube-proxy-4djxc" in "kube-system" namespace has status "Ready":"True"
	I0920 17:40:59.698126  300442 pod_ready.go:82] duration metric: took 7.405167ms for pod "kube-proxy-4djxc" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.698151  300442 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:40:59.775624  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:00.059838  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:00.095442  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:00.096045  300442 pod_ready.go:93] pod "kube-scheduler-addons-545041" in "kube-system" namespace has status "Ready":"True"
	I0920 17:41:00.096109  300442 pod_ready.go:82] duration metric: took 397.935735ms for pod "kube-scheduler-addons-545041" in "kube-system" namespace to be "Ready" ...
	I0920 17:41:00.096238  300442 pod_ready.go:39] duration metric: took 12.966624747s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:41:00.096282  300442 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:41:00.096402  300442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:41:00.261420  300442 api_server.go:72] duration metric: took 15.421401007s to wait for apiserver process to appear ...
	I0920 17:41:00.261447  300442 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:41:00.261476  300442 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 17:41:00.300411  300442 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
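The healthz probe queries the apiserver directly at https://192.168.49.2:8443/healthz and treats an HTTP 200 response with body "ok" as healthy. With this profile's kubeconfig, the same endpoint can be hit with, for example:

    kubectl --context addons-545041 get --raw /healthz
    # ok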
	I0920 17:41:00.301142  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:00.312797  300442 api_server.go:141] control plane version: v1.31.1
	I0920 17:41:00.312835  300442 api_server.go:131] duration metric: took 51.37554ms to wait for apiserver health ...
	I0920 17:41:00.312847  300442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:41:00.335443  300442 system_pods.go:59] 18 kube-system pods found
	I0920 17:41:00.335518  300442 system_pods.go:61] "coredns-7c65d6cfc9-gjmdv" [527dc137-db5a-4a4b-9968-8ce459f2a4e2] Running
	I0920 17:41:00.335530  300442 system_pods.go:61] "csi-hostpath-attacher-0" [e4d0af06-9870-446c-8ae1-ff2a4e0cb522] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:41:00.335541  300442 system_pods.go:61] "csi-hostpath-resizer-0" [98437b93-0e38-43bd-89b8-bc6739bd4d2d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:41:00.335554  300442 system_pods.go:61] "csi-hostpathplugin-xlxzb" [2b0fb640-85d1-4e09-ad04-d25addfe3032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:41:00.335562  300442 system_pods.go:61] "etcd-addons-545041" [016df0ad-25a0-4c99-bce7-ed6e43b94adf] Running
	I0920 17:41:00.335568  300442 system_pods.go:61] "kindnet-kkkg6" [13f7dc04-de27-4af0-8d80-a6ef937898a6] Running
	I0920 17:41:00.335575  300442 system_pods.go:61] "kube-apiserver-addons-545041" [8677d166-4e84-487c-a376-48712d36b1a2] Running
	I0920 17:41:00.335581  300442 system_pods.go:61] "kube-controller-manager-addons-545041" [6f4d7da7-367c-463e-8651-bc93c7249bcf] Running
	I0920 17:41:00.335585  300442 system_pods.go:61] "kube-ingress-dns-minikube" [d05c31b1-8d7d-4e8a-9a61-e280dd7bf5a9] Running
	I0920 17:41:00.335590  300442 system_pods.go:61] "kube-proxy-4djxc" [cd07ed65-642d-404d-ae79-d247cabc5ca3] Running
	I0920 17:41:00.335594  300442 system_pods.go:61] "kube-scheduler-addons-545041" [0d0662ce-f1a8-4163-826a-68d09649652e] Running
	I0920 17:41:00.335601  300442 system_pods.go:61] "metrics-server-84c5f94fbc-dmqpl" [c269bb8d-903d-4258-9fd2-fff102c918ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:41:00.335609  300442 system_pods.go:61] "nvidia-device-plugin-daemonset-7lb5s" [b1b02761-6700-494a-9306-c64f38225c4a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 17:41:00.335618  300442 system_pods.go:61] "registry-66c9cd494c-n7tg6" [33f18f34-de1d-4cf8-8ecc-e9e8a5dcbaff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:41:00.335626  300442 system_pods.go:61] "registry-proxy-qshpk" [8a023632-d1b9-4e68-b4bd-f9572a1acfe5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:41:00.335633  300442 system_pods.go:61] "snapshot-controller-56fcc65765-6c6lm" [89eb3c4e-6a86-4cfc-9931-b99cbf83aeb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:41:00.335641  300442 system_pods.go:61] "snapshot-controller-56fcc65765-pnvr5" [81b85b64-8b92-45e2-8f6e-406e77734da3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:41:00.335646  300442 system_pods.go:61] "storage-provisioner" [77510664-d871-4708-83a1-3b33919e154e] Running
	I0920 17:41:00.335655  300442 system_pods.go:74] duration metric: took 22.799506ms to wait for pod list to return data ...
	I0920 17:41:00.335667  300442 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:41:00.465824  300442 default_sa.go:45] found service account: "default"
	I0920 17:41:00.465911  300442 default_sa.go:55] duration metric: took 130.235795ms for default service account to be created ...
	I0920 17:41:00.465938  300442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:41:00.536005  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:00.573426  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:00.673261  300442 system_pods.go:86] 18 kube-system pods found
	I0920 17:41:00.673338  300442 system_pods.go:89] "coredns-7c65d6cfc9-gjmdv" [527dc137-db5a-4a4b-9968-8ce459f2a4e2] Running
	I0920 17:41:00.673369  300442 system_pods.go:89] "csi-hostpath-attacher-0" [e4d0af06-9870-446c-8ae1-ff2a4e0cb522] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:41:00.673407  300442 system_pods.go:89] "csi-hostpath-resizer-0" [98437b93-0e38-43bd-89b8-bc6739bd4d2d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:41:00.673434  300442 system_pods.go:89] "csi-hostpathplugin-xlxzb" [2b0fb640-85d1-4e09-ad04-d25addfe3032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:41:00.673456  300442 system_pods.go:89] "etcd-addons-545041" [016df0ad-25a0-4c99-bce7-ed6e43b94adf] Running
	I0920 17:41:00.673482  300442 system_pods.go:89] "kindnet-kkkg6" [13f7dc04-de27-4af0-8d80-a6ef937898a6] Running
	I0920 17:41:00.673514  300442 system_pods.go:89] "kube-apiserver-addons-545041" [8677d166-4e84-487c-a376-48712d36b1a2] Running
	I0920 17:41:00.673541  300442 system_pods.go:89] "kube-controller-manager-addons-545041" [6f4d7da7-367c-463e-8651-bc93c7249bcf] Running
	I0920 17:41:00.673562  300442 system_pods.go:89] "kube-ingress-dns-minikube" [d05c31b1-8d7d-4e8a-9a61-e280dd7bf5a9] Running
	I0920 17:41:00.673585  300442 system_pods.go:89] "kube-proxy-4djxc" [cd07ed65-642d-404d-ae79-d247cabc5ca3] Running
	I0920 17:41:00.673620  300442 system_pods.go:89] "kube-scheduler-addons-545041" [0d0662ce-f1a8-4163-826a-68d09649652e] Running
	I0920 17:41:00.673647  300442 system_pods.go:89] "metrics-server-84c5f94fbc-dmqpl" [c269bb8d-903d-4258-9fd2-fff102c918ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:41:00.673670  300442 system_pods.go:89] "nvidia-device-plugin-daemonset-7lb5s" [b1b02761-6700-494a-9306-c64f38225c4a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 17:41:00.673696  300442 system_pods.go:89] "registry-66c9cd494c-n7tg6" [33f18f34-de1d-4cf8-8ecc-e9e8a5dcbaff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:41:00.673730  300442 system_pods.go:89] "registry-proxy-qshpk" [8a023632-d1b9-4e68-b4bd-f9572a1acfe5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:41:00.673759  300442 system_pods.go:89] "snapshot-controller-56fcc65765-6c6lm" [89eb3c4e-6a86-4cfc-9931-b99cbf83aeb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:41:00.673808  300442 system_pods.go:89] "snapshot-controller-56fcc65765-pnvr5" [81b85b64-8b92-45e2-8f6e-406e77734da3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:41:00.673842  300442 system_pods.go:89] "storage-provisioner" [77510664-d871-4708-83a1-3b33919e154e] Running
	I0920 17:41:00.673868  300442 system_pods.go:126] duration metric: took 207.909781ms to wait for k8s-apps to be running ...
	I0920 17:41:00.673890  300442 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:41:00.673977  300442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:41:00.687086  300442 system_svc.go:56] duration metric: took 13.187176ms WaitForService to wait for kubelet
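(The two system_svc lines above are the kubelet readiness check: minikube shells into the node and runs "sudo systemctl is-active --quiet service kubelet", treating a zero exit status as "running". A minimal Go sketch of that check follows; running exec.Command locally instead of over SSH is an illustrative assumption, not minikube's actual plumbing.)

package main

import (
	"fmt"
	"os/exec"
)

// isKubeletActive mirrors the check logged by system_svc.go above: with
// --quiet, systemctl prints nothing and reports state purely through its
// exit code, so a nil error from Run means the unit is active.
func isKubeletActive() bool {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil
}

func main() {
	if isKubeletActive() {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not active")
	}
}
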
	I0920 17:41:00.687113  300442 kubeadm.go:582] duration metric: took 15.847101427s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:41:00.687132  300442 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:41:00.776176  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:00.865628  300442 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 17:41:00.865665  300442 node_conditions.go:123] node cpu capacity is 2
	I0920 17:41:00.865680  300442 node_conditions.go:105] duration metric: took 178.541782ms to run NodePressure ...
	I0920 17:41:00.865693  300442 start.go:241] waiting for startup goroutines ...
	I0920 17:41:01.037564  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:01.073215  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:01.275593  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:01.536853  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:01.573673  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:01.775535  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:02.037795  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:02.138023  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:02.275211  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:02.536408  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:02.573722  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:02.774347  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:03.042282  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:03.074376  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:03.275853  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:03.537383  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:03.573678  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:03.774963  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:04.036928  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:04.074013  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:04.276588  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:04.538639  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:04.574397  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:04.774895  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:05.036912  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:05.074079  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:05.275604  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:05.536172  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:05.638625  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:05.774747  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:06.036870  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:06.073911  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:06.275588  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:06.536733  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:06.574446  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:06.775204  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:07.036551  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:07.077096  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:07.275804  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:07.537176  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:07.573330  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:07.777347  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:08.037424  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:08.074514  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:08.279547  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:08.536157  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:08.574091  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:08.774878  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:09.037313  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:09.074882  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:09.301271  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:09.536822  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:09.574119  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:09.775553  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:10.061257  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:10.073526  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:10.275431  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:10.537284  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:10.573737  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:10.777345  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:11.037837  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:11.073505  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:11.276421  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:11.537184  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:11.574924  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:11.774628  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:12.036967  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:12.137855  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:12.276021  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:12.536462  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:12.573866  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:12.774656  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:13.038197  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:13.137660  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:13.274828  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:13.536407  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:13.573934  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:13.774860  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:14.036590  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:14.073329  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:14.275006  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:14.538106  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:14.577718  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:14.779881  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:15.046591  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:15.093144  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:15.275608  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:15.538658  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:15.575374  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:15.814269  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:16.039505  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:16.074640  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:16.275297  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:16.536976  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:16.573519  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:16.781221  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:17.040853  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:17.075956  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:17.275269  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:17.537052  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:17.573552  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:17.791501  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:18.037663  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:18.074685  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:18.276006  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:18.537423  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:18.573497  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:18.775434  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:19.037034  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:19.073582  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:19.275421  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:19.538379  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:19.574417  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:19.775177  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:20.037790  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:20.073813  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:20.275352  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:20.536508  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:20.573815  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:20.803200  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:21.037279  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:21.073983  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:21.274240  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:21.537218  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:21.574257  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:21.775227  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:22.036744  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:22.073404  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:22.300592  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:22.536286  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:22.573919  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:22.778762  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:23.036110  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:23.073667  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:23.275377  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:23.536171  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:23.573843  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:23.777060  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:24.038016  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:24.074046  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:24.275103  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:24.536930  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:24.573671  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:24.775113  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:25.037512  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:25.073339  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:25.274950  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:25.536332  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:25.574243  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:25.775688  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:26.037995  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:26.074206  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:26.274984  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:26.536809  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:26.573865  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:41:26.775160  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:27.037097  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:27.075350  300442 kapi.go:107] duration metric: took 32.005811857s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:41:27.274954  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:27.536277  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:27.850345  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:28.035926  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:28.301752  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:28.535623  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:28.800449  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:29.037260  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:29.301682  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:29.537675  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:29.781614  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:30.040822  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:30.275069  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:30.537642  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:30.786832  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:31.036513  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:31.277045  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:31.537807  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:31.775652  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:32.037063  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:32.274361  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:32.537002  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:32.806250  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:33.038048  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:33.275558  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:33.537657  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:33.774862  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:34.035965  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:34.274945  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:34.536127  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:34.775630  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:35.036978  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:35.275390  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:35.536824  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:35.775153  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:36.037733  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:36.300516  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:36.537051  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:36.775822  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:37.038509  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:37.277772  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:37.537031  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:37.776005  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:38.036933  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:38.275191  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:38.535970  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:38.776093  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:39.037856  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:39.301152  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:39.538721  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:39.775353  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:40.049177  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:40.274326  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:40.537652  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:40.774737  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:41.036944  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:41.274673  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:41.536493  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:41.774707  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:42.045380  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:42.302834  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:42.538125  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:42.775431  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:43.040489  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:43.274526  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:43.535851  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:43.775581  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:44.040107  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:44.274707  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:44.536793  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:44.774562  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:45.057933  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:45.283820  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:45.536542  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:45.806806  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:41:46.036680  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:46.274959  300442 kapi.go:107] duration metric: took 50.505040357s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:41:46.535585  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:47.036024  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:47.536671  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:48.036789  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:48.535375  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:49.036905  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:49.536025  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:50.036570  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:50.536047  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:51.037379  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:51.537060  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:52.037081  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:52.536984  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:53.036052  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:53.536348  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:54.037007  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:54.536462  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:55.036393  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:55.536397  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:56.036657  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:56.535639  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:57.036497  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:57.536071  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:58.036040  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:58.536477  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:59.036746  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:41:59.535840  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:42:00.067724  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:42:00.538468  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:42:01.037031  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:42:01.537210  300442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:42:02.047955  300442 kapi.go:107] duration metric: took 1m9.016395266s to wait for app.kubernetes.io/name=ingress-nginx ...
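(The polling pattern that dominates this log, kapi.go:96 re-checking a label selector roughly every half second until the matching pods leave Pending, can be approximated with client-go as below. The namespace, selector, timeout, and kubeconfig path are illustrative assumptions; the test waits on several selectors in turn: registry, csi-hostpath-driver, ingress-nginx, gcp-auth.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until they are all Running
// or the deadline passes, much like the repeated kapi.go:96 lines above.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods are Running")
}
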
	I0920 17:42:20.209247  300442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:42:20.209268  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:20.698002  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:21.198019  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:21.698005  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:22.198072  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:22.697985  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:23.197702  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:23.698520  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:24.198854  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:24.698415  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:25.198225  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:25.702177  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:26.198638  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:26.697759  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:27.198995  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:27.697991  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:28.198146  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:28.697522  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:29.198704  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:29.699432  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:30.208276  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:30.698499  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:31.198916  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:31.698663  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:32.198483  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:32.698435  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:33.198440  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:33.698794  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:34.198176  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:34.698045  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:35.198087  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:35.698568  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:36.199298  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:36.698019  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:37.198365  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:37.698503  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:38.197819  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:38.698239  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:39.197984  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:39.698150  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:40.198013  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:40.697840  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:41.198390  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:41.698339  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:42.199919  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:42.697760  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:43.198596  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:43.697577  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:44.198060  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:44.697966  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:45.199568  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:45.697830  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:46.198373  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:46.697701  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:47.198215  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:47.698067  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:48.198193  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:48.698135  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:49.197411  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:49.698325  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:50.198132  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:50.698072  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:51.198595  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:51.697892  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:52.198675  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:52.698609  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:53.198739  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:53.697805  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:54.198040  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:54.698220  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:55.198741  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:55.699175  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:56.201518  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:56.698274  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:57.209246  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:57.698454  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:58.198039  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:58.697949  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:59.197724  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:42:59.698716  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:00.212113  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:00.698478  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:01.199277  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:01.698372  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:02.198549  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:02.698035  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:03.198429  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:03.698657  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:04.198429  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:04.698073  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:05.197636  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:05.698366  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:06.198638  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:06.698820  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:07.197999  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:07.698253  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:08.198951  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:08.697561  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:09.197643  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:09.698293  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:10.198762  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:10.698467  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:11.198839  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:11.698619  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:12.198903  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:12.698134  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:13.198397  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:13.697734  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:14.198912  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:14.697414  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:15.198503  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:15.698092  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:16.198142  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:16.697949  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:17.198453  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:17.698124  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:18.198840  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:18.697994  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:19.198007  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:19.697797  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:20.198912  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:20.697733  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:21.198322  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:21.697797  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:22.199060  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:22.698575  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:23.198446  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:23.697539  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:24.198784  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:24.698825  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:25.197924  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:25.698254  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:26.199218  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:26.697657  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:27.198285  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:27.698638  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:28.198768  300442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:43:28.698064  300442 kapi.go:107] duration metric: took 2m31.503667434s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:43:28.700318  300442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-545041 cluster.
	I0920 17:43:28.702054  300442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:43:28.704406  300442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:43:28.706727  300442 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, ingress-dns, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 17:43:28.708810  300442 addons.go:510] duration metric: took 2m43.868252492s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner-rancher ingress-dns storage-provisioner volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 17:43:28.708886  300442 start.go:246] waiting for cluster config update ...
	I0920 17:43:28.708907  300442 start.go:255] writing updated cluster config ...
	I0920 17:43:28.709202  300442 ssh_runner.go:195] Run: rm -f paused
	I0920 17:43:29.087770  300442 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:43:29.093116  300442 out.go:177] * Done! kubectl is now configured to use "addons-545041" cluster and "default" namespace by default
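	The gcp-auth addon output above notes that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest, assuming the admission webhook only checks for that label key and accepts a value of "true" (the pod name and image below are illustrative placeholders, not taken from this test run):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-example        # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"     # label key quoted from the addon output above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx                     # placeholder image
	
	For pods created before the addon finished, the same output suggests either recreating them or re-running the enable step with --refresh, e.g. `minikube addons enable gcp-auth --refresh` against the "addons-545041" profile (flag usage as described in the log above, not verified here).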
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8ac041ee0f529       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   b00bcd54e6111       gadget-p64mf
	e7a5a88dd7562       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   f4a3408bcc300       gcp-auth-89d5ffd79-b64tz
	36acda9fa6f8a       8b46b1cd48760       4 minutes ago       Running             admission                                0                   dca48f427ee53       volcano-admission-77d7d48b68-zt55r
	eb061c303a093       289a818c8d9c5       4 minutes ago       Running             controller                               0                   72b5e4625d490       ingress-nginx-controller-bc57996ff-6l76k
	536bac2f436fb       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   dad9772662eda       csi-hostpathplugin-xlxzb
	9af3a2386cfd9       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   dad9772662eda       csi-hostpathplugin-xlxzb
	96c0c6aaf0a09       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   dad9772662eda       csi-hostpathplugin-xlxzb
	46e65fe07ebbd       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   dad9772662eda       csi-hostpathplugin-xlxzb
	b15c21d3e4f58       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   dad9772662eda       csi-hostpathplugin-xlxzb
	5d6a43a6da6b5       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   6d6482b89bb9a       csi-hostpath-attacher-0
	479f01ff3d1ee       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   dad9772662eda       csi-hostpathplugin-xlxzb
	952d10e52217f       420193b27261a       5 minutes ago       Exited              patch                                    2                   811f46cadfd79       ingress-nginx-admission-patch-55dqc
	a2d0866261747       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   d3c03b80727d5       volcano-scheduler-576bc46687-hrx4m
	d0e46508b1334       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   8fbbc1b1b4199       csi-hostpath-resizer-0
	bdcd5ae0cfba6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   4d28e5e1f2dcc       snapshot-controller-56fcc65765-6c6lm
	79d6ca9d77743       420193b27261a       5 minutes ago       Exited              create                                   0                   eea0805092af4       ingress-nginx-admission-create-r5fmc
	69aa3430934dc       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   1e6e1c4ddb224       local-path-provisioner-86d989889c-5rgb7
	8dc9c27a0fdad       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   be8e0b80c37f6       volcano-controllers-56675bb4d5-nrv6p
	0d1f3a160b815       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   973cc51fa288b       registry-proxy-qshpk
	21d03eb3531a5       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   5978bbccce4c9       snapshot-controller-56fcc65765-pnvr5
	e72b203a51f6c       77bdba588b953       5 minutes ago       Running             yakd                                     0                   3ed9e54663fae       yakd-dashboard-67d98fc6b-k954s
	0921bc0c01815       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   5ef52af9d1d9c       metrics-server-84c5f94fbc-dmqpl
	848bdfe2fa377       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   45e6311a368e7       registry-66c9cd494c-n7tg6
	8464d5cf0a849       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   23ba6783a4854       cloud-spanner-emulator-769b77f747-82z2s
	156a9570820ca       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   34830f07d4385       nvidia-device-plugin-daemonset-7lb5s
	0e2dce4a7a03f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   3642902314af5       kube-ingress-dns-minikube
	002a9a789a31a       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   43ae4f8f2daa3       coredns-7c65d6cfc9-gjmdv
	d3934388408f0       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   f512071bca579       storage-provisioner
	5de2e3c00abc5       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   47586bdde5133       kindnet-kkkg6
	25eacda557f85       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   df5d406415f5d       kube-proxy-4djxc
	21792095ea7d7       27e3830e14027       6 minutes ago       Running             etcd                                     0                   42c051f78d7d2       etcd-addons-545041
	113b40fd112fa       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   ab5e257f3eed1       kube-controller-manager-addons-545041
	b3ed2a3ce0245       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   39433ed40b395       kube-scheduler-addons-545041
	3b0abad0d3cd4       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   c55e2113b1755       kube-apiserver-addons-545041
	
	
	==> containerd <==
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.319830752Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.323432475Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 131.627642ms"
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.323476840Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.325477853Z" level=info msg="CreateContainer within sandbox \"b00bcd54e611173e67c0172e3773d1672ac36b554ab511a34690af22d42b3801\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.353106191Z" level=info msg="CreateContainer within sandbox \"b00bcd54e611173e67c0172e3773d1672ac36b554ab511a34690af22d42b3801\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\""
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.354291731Z" level=info msg="StartContainer for \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\""
	Sep 20 17:44:20 addons-545041 containerd[813]: time="2024-09-20T17:44:20.417640687Z" level=info msg="StartContainer for \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\" returns successfully"
	Sep 20 17:44:21 addons-545041 containerd[813]: time="2024-09-20T17:44:21.874447225Z" level=error msg="ExecSync for \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\" failed" error="failed to exec in container: failed to start exec \"b49d29447e7b9534697816c58a428c3985eae9935c8ef108d03f0d1b545f6e2d\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 20 17:44:21 addons-545041 containerd[813]: time="2024-09-20T17:44:21.906636611Z" level=error msg="ExecSync for \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\" failed" error="failed to exec in container: failed to start exec \"0c593a4e56506bc1061a5a2243453e1c7adb9b1a298495262a6d68c135cd96a7\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 20 17:44:21 addons-545041 containerd[813]: time="2024-09-20T17:44:21.949226450Z" level=error msg="ExecSync for \"8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8\" failed" error="failed to exec in container: failed to start exec \"45f1c7dfb8b10e7ba294aa911c01dffa3a4dc38f3e8adc59635a8d0a81500bcb\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 20 17:44:22 addons-545041 containerd[813]: time="2024-09-20T17:44:22.043989657Z" level=info msg="shim disconnected" id=8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8 namespace=k8s.io
	Sep 20 17:44:22 addons-545041 containerd[813]: time="2024-09-20T17:44:22.044052410Z" level=warning msg="cleaning up after shim disconnected" id=8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8 namespace=k8s.io
	Sep 20 17:44:22 addons-545041 containerd[813]: time="2024-09-20T17:44:22.044063651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 17:44:22 addons-545041 containerd[813]: time="2024-09-20T17:44:22.437250815Z" level=info msg="RemoveContainer for \"b30a0abf1b404873b332885b713120bef90c0398bbf13d67abd4fd0fcf671589\""
	Sep 20 17:44:22 addons-545041 containerd[813]: time="2024-09-20T17:44:22.445710013Z" level=info msg="RemoveContainer for \"b30a0abf1b404873b332885b713120bef90c0398bbf13d67abd4fd0fcf671589\" returns successfully"
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.318707718Z" level=info msg="RemoveContainer for \"307dcb00df49bc10397758d72dff724e4a7095b06931eccbac69bdf449e5e8fc\""
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.325770448Z" level=info msg="RemoveContainer for \"307dcb00df49bc10397758d72dff724e4a7095b06931eccbac69bdf449e5e8fc\" returns successfully"
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.327891346Z" level=info msg="StopPodSandbox for \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\""
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.340690491Z" level=info msg="TearDown network for sandbox \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\" successfully"
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.340733642Z" level=info msg="StopPodSandbox for \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\" returns successfully"
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.341285107Z" level=info msg="RemovePodSandbox for \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\""
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.341420573Z" level=info msg="Forcibly stopping sandbox \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\""
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.365276590Z" level=info msg="TearDown network for sandbox \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\" successfully"
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.372532796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 20 17:44:40 addons-545041 containerd[813]: time="2024-09-20T17:44:40.372747631Z" level=info msg="RemovePodSandbox \"319a14fbfef4a6c7673db02e636928b40abf54ee5f27a39fdd7fdc681ce9133a\" returns successfully"
	
	
	==> coredns [002a9a789a31ac0fe4f1df7659c9740326b57eaeaec44dd683e3b73dd263c4be] <==
	[INFO] 10.244.0.10:34416 - 12812 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165358s
	[INFO] 10.244.0.10:55104 - 43338 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002162642s
	[INFO] 10.244.0.10:55104 - 3151 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001877326s
	[INFO] 10.244.0.10:57103 - 55736 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146157s
	[INFO] 10.244.0.10:57103 - 1462 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105123s
	[INFO] 10.244.0.10:47521 - 5539 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082297s
	[INFO] 10.244.0.10:47521 - 38559 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045785s
	[INFO] 10.244.0.10:41065 - 37284 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045054s
	[INFO] 10.244.0.10:41065 - 62625 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038851s
	[INFO] 10.244.0.10:40336 - 47798 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042297s
	[INFO] 10.244.0.10:40336 - 43184 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038556s
	[INFO] 10.244.0.10:39436 - 10031 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009785681s
	[INFO] 10.244.0.10:39436 - 21033 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.017306492s
	[INFO] 10.244.0.10:54680 - 41878 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105862s
	[INFO] 10.244.0.10:54680 - 148 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130846s
	[INFO] 10.244.0.24:41743 - 30141 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167811s
	[INFO] 10.244.0.24:55788 - 51109 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000374941s
	[INFO] 10.244.0.24:48262 - 869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127434s
	[INFO] 10.244.0.24:34578 - 24817 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155355s
	[INFO] 10.244.0.24:50309 - 59613 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117736s
	[INFO] 10.244.0.24:53726 - 41304 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077678s
	[INFO] 10.244.0.24:33945 - 32374 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002370521s
	[INFO] 10.244.0.24:42767 - 22141 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002005254s
	[INFO] 10.244.0.24:48635 - 30260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002679042s
	[INFO] 10.244.0.24:59675 - 42980 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001987712s
	
	
	==> describe nodes <==
	Name:               addons-545041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-545041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-545041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_40_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-545041
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-545041"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:40:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-545041
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:46:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:43:44 +0000   Fri, 20 Sep 2024 17:40:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:43:44 +0000   Fri, 20 Sep 2024 17:40:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:43:44 +0000   Fri, 20 Sep 2024 17:40:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:43:44 +0000   Fri, 20 Sep 2024 17:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-545041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f64e274b1a24e3496e30adf952ed35d
	  System UUID:                789cdea3-3ecc-4646-af83-928344e4dd28
	  Boot ID:                    b363b069-6c72-47b0-a80b-36cf6b75e261
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-82z2s     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-p64mf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-b64tz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6l76k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-gjmdv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-xlxzb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-545041                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-kkkg6                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-545041                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-545041       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-4djxc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-545041                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 metrics-server-84c5f94fbc-dmqpl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-7lb5s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-n7tg6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-qshpk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-6c6lm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-pnvr5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-5rgb7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-zt55r          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-nrv6p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-hrx4m          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-k954s              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m14s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m14s (x7 over 6m14s)  kubelet          Node addons-545041 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m14s)  kubelet          Node addons-545041 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node addons-545041 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m14s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-545041 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-545041 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-545041 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-545041 event: Registered Node addons-545041 in Controller
	
	
	==> dmesg <==
	[Sep20 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014701] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494726] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.796639] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.667553] kauditd_printk_skb: 36 callbacks suppressed
	[Sep20 16:44] hrtimer: interrupt took 2333756 ns
	[Sep20 17:08] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [21792095ea7d773aa1c1e541ad13502fe12f5b46b1592035e7d0443b5e874f01] <==
	{"level":"info","ts":"2024-09-20T17:40:34.611246Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T17:40:34.611551Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T17:40:34.611678Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T17:40:34.616239Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:40:34.616374Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:40:34.877349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:40:34.877598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:40:34.877707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T17:40:34.877860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:40:34.877950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:40:34.878054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:40:34.878140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:40:34.879162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:40:34.883245Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-545041 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:40:34.883414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:40:34.883949Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:40:34.884166Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:40:34.884274Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:40:34.884403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:40:34.885342Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:40:34.886548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T17:40:34.891238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:40:34.891416Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:40:34.895575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:40:34.901648Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [e7a5a88dd7562a418553822f3ce7df7bad16268b16c3f0007cd978447c2edcac] <==
	2024/09/20 17:43:27 GCP Auth Webhook started!
	2024/09/20 17:43:45 Ready to marshal response ...
	2024/09/20 17:43:45 Ready to write response ...
	2024/09/20 17:43:45 Ready to marshal response ...
	2024/09/20 17:43:45 Ready to write response ...
	
	
	==> kernel <==
	 17:46:47 up  1:29,  0 users,  load average: 0.52, 1.32, 2.15
	Linux addons-545041 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5de2e3c00abc539e8d6bdc0e2c19cd18117e1f2ac915f74c718d535718b753a6] <==
	I0920 17:44:46.912021       1 main.go:299] handling current node
	I0920 17:44:56.911447       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:44:56.911483       1 main.go:299] handling current node
	I0920 17:45:06.912021       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:06.912057       1 main.go:299] handling current node
	I0920 17:45:16.911985       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:16.912021       1 main.go:299] handling current node
	I0920 17:45:26.911696       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:26.911921       1 main.go:299] handling current node
	I0920 17:45:36.911667       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:36.911701       1 main.go:299] handling current node
	I0920 17:45:46.911468       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:46.911503       1 main.go:299] handling current node
	I0920 17:45:56.912307       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:45:56.912341       1 main.go:299] handling current node
	I0920 17:46:06.911440       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:46:06.911493       1 main.go:299] handling current node
	I0920 17:46:16.912351       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:46:16.912390       1 main.go:299] handling current node
	I0920 17:46:26.911472       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:46:26.911506       1 main.go:299] handling current node
	I0920 17:46:36.912487       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:46:36.912676       1 main.go:299] handling current node
	I0920 17:46:46.912963       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 17:46:46.913087       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3b0abad0d3cd4f6b5637047ec29a3e2be56786e5a9d64716cc50570a894f3101] <==
	W0920 17:41:58.087515       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:41:59.099752       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:00.117803       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:00.226609       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.6.202:443: connect: connection refused
	E0920 17:42:00.226660       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.6.202:443: connect: connection refused" logger="UnhandledError"
	W0920 17:42:00.250161       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.6.202:443: connect: connection refused
	E0920 17:42:00.250205       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.6.202:443: connect: connection refused" logger="UnhandledError"
	W0920 17:42:00.250563       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:00.266447       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:01.185076       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:02.276876       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:03.317943       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:04.410030       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:05.451858       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:06.456017       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:07.485925       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:08.542123       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.187.30:443: connect: connection refused
	W0920 17:42:20.122074       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.6.202:443: connect: connection refused
	E0920 17:42:20.122221       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.6.202:443: connect: connection refused" logger="UnhandledError"
	W0920 17:43:00.315638       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.6.202:443: connect: connection refused
	E0920 17:43:00.316189       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.6.202:443: connect: connection refused" logger="UnhandledError"
	W0920 17:43:00.321025       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.6.202:443: connect: connection refused
	E0920 17:43:00.321083       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.6.202:443: connect: connection refused" logger="UnhandledError"
	I0920 17:43:45.666484       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 17:43:45.701909       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [113b40fd112faea63e0c044a3b2d7409cb891ed4dd4e9a1b84bf4807bb60dcbd] <==
	I0920 17:43:00.418166       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:00.426224       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:00.438452       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:00.455500       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:00.455912       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:00.466433       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:00.488618       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:01.203704       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:01.228825       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:02.355811       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:02.373891       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:03.370007       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:03.381367       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:03.383623       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:03.389123       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 17:43:03.393529       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:03.401433       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 17:43:28.316761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="14.215422ms"
	I0920 17:43:28.316936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="44.997µs"
	I0920 17:43:33.025758       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 17:43:33.031475       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 17:43:33.071207       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 17:43:33.080536       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 17:43:44.107751       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-545041"
	I0920 17:43:45.384504       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [25eacda557f850092d416970da00fa2f3dfb49ea0faa68ca570a4e996b7a497d] <==
	I0920 17:40:46.448679       1 server_linux.go:66] "Using iptables proxy"
	I0920 17:40:46.543472       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 17:40:46.543564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:40:46.582017       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 17:40:46.582083       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:40:46.586171       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:40:46.586756       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:40:46.586771       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:40:46.588158       1 config.go:199] "Starting service config controller"
	I0920 17:40:46.588187       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:40:46.588214       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:40:46.588219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:40:46.588926       1 config.go:328] "Starting node config controller"
	I0920 17:40:46.588938       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:40:46.688890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:40:46.688940       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:40:46.689166       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b3ed2a3ce024595cc3c44cee0b33966297ab689b5439c9417689d16dfacd2d33] <==
	W0920 17:40:37.667828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:40:37.667964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:37.668146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:40:37.668247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:37.668455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:40:37.668556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:37.668757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:40:37.668857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:37.669033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:40:37.669126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.526739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:40:38.526787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.543244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:40:38.543289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.619430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:40:38.619700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.632428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:40:38.632684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.697017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:40:38.697252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.740392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:40:38.740663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:40:38.969341       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:40:38.969602       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 17:40:40.852235       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 17:44:50 addons-545041 kubelet[1479]: I0920 17:44:50.189076    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:44:50 addons-545041 kubelet[1479]: E0920 17:44:50.189256    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:44:55 addons-545041 kubelet[1479]: I0920 17:44:55.187797    1479 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7lb5s" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 17:45:01 addons-545041 kubelet[1479]: I0920 17:45:01.187807    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:45:01 addons-545041 kubelet[1479]: E0920 17:45:01.188031    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:45:02 addons-545041 kubelet[1479]: I0920 17:45:02.187951    1479 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-n7tg6" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 17:45:12 addons-545041 kubelet[1479]: I0920 17:45:12.188155    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:45:12 addons-545041 kubelet[1479]: E0920 17:45:12.188359    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:45:23 addons-545041 kubelet[1479]: I0920 17:45:23.188456    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:45:23 addons-545041 kubelet[1479]: E0920 17:45:23.189072    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:45:36 addons-545041 kubelet[1479]: I0920 17:45:36.188333    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:45:36 addons-545041 kubelet[1479]: E0920 17:45:36.188959    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:45:37 addons-545041 kubelet[1479]: I0920 17:45:37.188334    1479 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qshpk" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 17:45:50 addons-545041 kubelet[1479]: I0920 17:45:50.191236    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:45:50 addons-545041 kubelet[1479]: E0920 17:45:50.191476    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:46:02 addons-545041 kubelet[1479]: I0920 17:46:02.188651    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:46:02 addons-545041 kubelet[1479]: E0920 17:46:02.189357    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:46:06 addons-545041 kubelet[1479]: I0920 17:46:06.188698    1479 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-n7tg6" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 17:46:13 addons-545041 kubelet[1479]: I0920 17:46:13.187938    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:46:13 addons-545041 kubelet[1479]: E0920 17:46:13.188146    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:46:18 addons-545041 kubelet[1479]: I0920 17:46:18.188190    1479 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7lb5s" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 17:46:26 addons-545041 kubelet[1479]: I0920 17:46:26.188188    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:46:26 addons-545041 kubelet[1479]: E0920 17:46:26.188831    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	Sep 20 17:46:39 addons-545041 kubelet[1479]: I0920 17:46:39.188257    1479 scope.go:117] "RemoveContainer" containerID="8ac041ee0f5299dbb5e851f452d1108420770b4fc2ad91741febc5b7d2f20cd8"
	Sep 20 17:46:39 addons-545041 kubelet[1479]: E0920 17:46:39.188477    1479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-p64mf_gadget(0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c)\"" pod="gadget/gadget-p64mf" podUID="0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c"
	
	
	==> storage-provisioner [d3934388408f0b656b164319e3b379a6ae98ca112f1ef0ab1b6514597bfc903d] <==
	I0920 17:40:50.767028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:40:50.778937       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:40:50.778983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:40:50.794190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:40:50.796092       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-545041_7da1a87b-113f-4309-bc34-608a37e0553e!
	I0920 17:40:50.806515       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7474383-6176-4330-9a27-a6639ad93d67", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-545041_7da1a87b-113f-4309-bc34-608a37e0553e became leader
	I0920 17:40:50.896557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-545041_7da1a87b-113f-4309-bc34-608a37e0553e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-545041 -n addons-545041
helpers_test.go:261: (dbg) Run:  kubectl --context addons-545041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-r5fmc ingress-nginx-admission-patch-55dqc test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-545041 describe pod ingress-nginx-admission-create-r5fmc ingress-nginx-admission-patch-55dqc test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-545041 describe pod ingress-nginx-admission-create-r5fmc ingress-nginx-admission-patch-55dqc test-job-nginx-0: exit status 1 (102.319429ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r5fmc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-55dqc" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-545041 describe pod ingress-nginx-admission-create-r5fmc ingress-nginx-admission-patch-55dqc test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.87s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (374.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0920 18:31:00.179430  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m10.795924283s)

-- stdout --
	* [old-k8s-version-475170] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-475170" primary control-plane node in "old-k8s-version-475170" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Restarting existing docker container for "old-k8s-version-475170" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-475170 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0920 18:30:52.190850  506753 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:30:52.195180  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:30:52.195194  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:30:52.195199  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:30:52.195470  506753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:30:52.195866  506753 out.go:352] Setting JSON to false
	I0920 18:30:52.196789  506753 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8003,"bootTime":1726849050,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:30:52.196850  506753 start.go:139] virtualization:  
	I0920 18:30:52.199352  506753 out.go:177] * [old-k8s-version-475170] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:30:52.201887  506753 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:30:52.202094  506753 notify.go:220] Checking for updates...
	I0920 18:30:52.206433  506753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:30:52.208871  506753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 18:30:52.211063  506753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 18:30:52.213117  506753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:30:52.215396  506753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:30:52.218093  506753 config.go:182] Loaded profile config "old-k8s-version-475170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 18:30:52.221094  506753 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 18:30:52.223456  506753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:30:52.278912  506753 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:30:52.279115  506753 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:30:52.443924  506753 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-20 18:30:52.383250926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:30:52.444037  506753 docker.go:318] overlay module found
	I0920 18:30:52.446447  506753 out.go:177] * Using the docker driver based on existing profile
	I0920 18:30:52.448457  506753 start.go:297] selected driver: docker
	I0920 18:30:52.448473  506753 start.go:901] validating driver "docker" against &{Name:old-k8s-version-475170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-475170 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:30:52.448593  506753 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:30:52.449191  506753 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:30:52.526441  506753 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-20 18:30:52.514070221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:30:52.526779  506753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:30:52.526806  506753 cni.go:84] Creating CNI manager for ""
	I0920 18:30:52.526867  506753 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:30:52.526910  506753 start.go:340] cluster config:
	{Name:old-k8s-version-475170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-475170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:30:52.529381  506753 out.go:177] * Starting "old-k8s-version-475170" primary control-plane node in "old-k8s-version-475170" cluster
	I0920 18:30:52.531643  506753 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 18:30:52.534046  506753 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 18:30:52.536064  506753 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 18:30:52.536131  506753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 18:30:52.536140  506753 cache.go:56] Caching tarball of preloaded images
	I0920 18:30:52.536222  506753 preload.go:172] Found /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 18:30:52.536232  506753 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0920 18:30:52.536349  506753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/config.json ...
	I0920 18:30:52.536563  506753 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	W0920 18:30:52.570095  506753 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed is of wrong architecture
	I0920 18:30:52.570114  506753 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 18:30:52.570187  506753 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 18:30:52.570213  506753 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 18:30:52.570218  506753 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 18:30:52.570226  506753 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 18:30:52.570231  506753 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 18:30:52.708067  506753 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 18:30:52.708099  506753 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:30:52.708143  506753 start.go:360] acquireMachinesLock for old-k8s-version-475170: {Name:mk4d8656377382173c8e718b41822611d0eb711c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:30:52.708211  506753 start.go:364] duration metric: took 44.431µs to acquireMachinesLock for "old-k8s-version-475170"
	I0920 18:30:52.708236  506753 start.go:96] Skipping create...Using existing machine configuration
	I0920 18:30:52.708251  506753 fix.go:54] fixHost starting: 
	I0920 18:30:52.708525  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:30:52.747106  506753 fix.go:112] recreateIfNeeded on old-k8s-version-475170: state=Stopped err=<nil>
	W0920 18:30:52.747135  506753 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 18:30:52.749933  506753 out.go:177] * Restarting existing docker container for "old-k8s-version-475170" ...
	I0920 18:30:52.751910  506753 cli_runner.go:164] Run: docker start old-k8s-version-475170
	I0920 18:30:53.221024  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:30:53.248034  506753 kic.go:430] container "old-k8s-version-475170" state is running.
	I0920 18:30:53.248502  506753 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-475170
	I0920 18:30:53.291768  506753 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/config.json ...
	I0920 18:30:53.292006  506753 machine.go:93] provisionDockerMachine start ...
	I0920 18:30:53.292071  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:53.320414  506753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:30:53.320681  506753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0920 18:30:53.320691  506753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:30:53.321334  506753 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41976->127.0.0.1:33434: read: connection reset by peer
	I0920 18:30:56.471281  506753 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-475170
	
	I0920 18:30:56.471312  506753 ubuntu.go:169] provisioning hostname "old-k8s-version-475170"
	I0920 18:30:56.471383  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:56.491697  506753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:30:56.491955  506753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0920 18:30:56.491979  506753 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-475170 && echo "old-k8s-version-475170" | sudo tee /etc/hostname
	I0920 18:30:56.644069  506753 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-475170
	
	I0920 18:30:56.644246  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:56.669435  506753 main.go:141] libmachine: Using SSH client type: native
	I0920 18:30:56.669685  506753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0920 18:30:56.669702  506753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-475170' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-475170/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-475170' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:30:56.807413  506753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:30:56.807440  506753 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-294290/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-294290/.minikube}
	I0920 18:30:56.807472  506753 ubuntu.go:177] setting up certificates
	I0920 18:30:56.807482  506753 provision.go:84] configureAuth start
	I0920 18:30:56.807557  506753 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-475170
	I0920 18:30:56.829942  506753 provision.go:143] copyHostCerts
	I0920 18:30:56.830012  506753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-294290/.minikube/ca.pem, removing ...
	I0920 18:30:56.830025  506753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-294290/.minikube/ca.pem
	I0920 18:30:56.830102  506753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/ca.pem (1082 bytes)
	I0920 18:30:56.830212  506753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-294290/.minikube/cert.pem, removing ...
	I0920 18:30:56.830224  506753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-294290/.minikube/cert.pem
	I0920 18:30:56.830252  506753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/cert.pem (1123 bytes)
	I0920 18:30:56.830308  506753 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-294290/.minikube/key.pem, removing ...
	I0920 18:30:56.830317  506753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-294290/.minikube/key.pem
	I0920 18:30:56.830344  506753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-294290/.minikube/key.pem (1679 bytes)
	I0920 18:30:56.830395  506753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-475170 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-475170]
	I0920 18:30:57.277542  506753 provision.go:177] copyRemoteCerts
	I0920 18:30:57.277611  506753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:30:57.277665  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:57.300914  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:30:57.397307  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:30:57.431099  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 18:30:57.461533  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:30:57.492149  506753 provision.go:87] duration metric: took 684.647041ms to configureAuth
	I0920 18:30:57.492225  506753 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:30:57.492491  506753 config.go:182] Loaded profile config "old-k8s-version-475170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 18:30:57.492538  506753 machine.go:96] duration metric: took 4.200515986s to provisionDockerMachine
	I0920 18:30:57.492559  506753 start.go:293] postStartSetup for "old-k8s-version-475170" (driver="docker")
	I0920 18:30:57.492585  506753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:30:57.492683  506753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:30:57.492745  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:57.514671  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:30:57.617273  506753 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:30:57.620622  506753 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:30:57.620659  506753 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:30:57.620671  506753 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:30:57.620679  506753 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:30:57.620689  506753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-294290/.minikube/addons for local assets ...
	I0920 18:30:57.620747  506753 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-294290/.minikube/files for local assets ...
	I0920 18:30:57.620826  506753 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-294290/.minikube/files/etc/ssl/certs/2996842.pem -> 2996842.pem in /etc/ssl/certs
	I0920 18:30:57.620937  506753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 18:30:57.633496  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/files/etc/ssl/certs/2996842.pem --> /etc/ssl/certs/2996842.pem (1708 bytes)
	I0920 18:30:57.674548  506753 start.go:296] duration metric: took 181.958617ms for postStartSetup
	I0920 18:30:57.674646  506753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:30:57.674696  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:57.697135  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:30:57.792522  506753 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:30:57.799409  506753 fix.go:56] duration metric: took 5.091151507s for fixHost
	I0920 18:30:57.799439  506753 start.go:83] releasing machines lock for "old-k8s-version-475170", held for 5.09121878s
	I0920 18:30:57.799515  506753 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-475170
	I0920 18:30:57.821518  506753 ssh_runner.go:195] Run: cat /version.json
	I0920 18:30:57.821912  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:57.822403  506753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:30:57.822477  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:30:57.869004  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:30:57.872070  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:30:57.962584  506753 ssh_runner.go:195] Run: systemctl --version
	I0920 18:30:58.094893  506753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:30:58.101044  506753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 18:30:58.126998  506753 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:30:58.127159  506753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:30:58.138935  506753 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 18:30:58.138980  506753 start.go:495] detecting cgroup driver to use...
	I0920 18:30:58.139055  506753 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:30:58.139130  506753 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 18:30:58.157805  506753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 18:30:58.175391  506753 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:30:58.175460  506753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:30:58.195874  506753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:30:58.212852  506753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:30:58.328293  506753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:30:58.433176  506753 docker.go:233] disabling docker service ...
	I0920 18:30:58.433244  506753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:30:58.447674  506753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:30:58.460052  506753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:30:58.579674  506753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:30:58.701900  506753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:30:58.715888  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:30:58.742478  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0920 18:30:58.761191  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 18:30:58.776812  506753 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 18:30:58.776923  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 18:30:58.788422  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 18:30:58.801925  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 18:30:58.817212  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 18:30:58.828889  506753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:30:58.840839  506753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 18:30:58.852291  506753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:30:58.861835  506753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:30:58.871435  506753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:30:58.985002  506753 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 18:30:59.237967  506753 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 18:30:59.238072  506753 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 18:30:59.242857  506753 start.go:563] Will wait 60s for crictl version
	I0920 18:30:59.242936  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:30:59.246557  506753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:30:59.310754  506753 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 18:30:59.310873  506753 ssh_runner.go:195] Run: containerd --version
	I0920 18:30:59.342178  506753 ssh_runner.go:195] Run: containerd --version
	I0920 18:30:59.374527  506753 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0920 18:30:59.377165  506753 cli_runner.go:164] Run: docker network inspect old-k8s-version-475170 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:30:59.402085  506753 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0920 18:30:59.406468  506753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:30:59.421290  506753 kubeadm.go:883] updating cluster {Name:old-k8s-version-475170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-475170 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:30:59.421406  506753 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 18:30:59.421465  506753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:30:59.465328  506753 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 18:30:59.465396  506753 containerd.go:534] Images already preloaded, skipping extraction
	I0920 18:30:59.465493  506753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:30:59.517030  506753 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 18:30:59.517052  506753 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:30:59.517063  506753 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0920 18:30:59.517175  506753 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-475170 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-475170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:30:59.517245  506753 ssh_runner.go:195] Run: sudo crictl info
	I0920 18:30:59.574302  506753 cni.go:84] Creating CNI manager for ""
	I0920 18:30:59.574331  506753 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:30:59.574341  506753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:30:59.574363  506753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-475170 NodeName:old-k8s-version-475170 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 18:30:59.574490  506753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-475170"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:30:59.574612  506753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 18:30:59.588557  506753 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:30:59.588699  506753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:30:59.602604  506753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0920 18:30:59.621491  506753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:30:59.652381  506753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0920 18:30:59.674066  506753 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:30:59.678107  506753 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:30:59.689236  506753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:30:59.801830  506753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:30:59.818812  506753 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170 for IP: 192.168.85.2
	I0920 18:30:59.818888  506753 certs.go:194] generating shared ca certs ...
	I0920 18:30:59.818921  506753 certs.go:226] acquiring lock for ca certs: {Name:mke4cc07e532357ce4393d299e5243fb270e9472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:30:59.819142  506753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key
	I0920 18:30:59.819230  506753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key
	I0920 18:30:59.819265  506753 certs.go:256] generating profile certs ...
	I0920 18:30:59.819405  506753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.key
	I0920 18:30:59.819512  506753 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/apiserver.key.0afed5f1
	I0920 18:30:59.819588  506753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/proxy-client.key
	I0920 18:30:59.819734  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/299684.pem (1338 bytes)
	W0920 18:30:59.819792  506753 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-294290/.minikube/certs/299684_empty.pem, impossibly tiny 0 bytes
	I0920 18:30:59.819819  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:30:59.819879  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:30:59.819928  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:30:59.819985  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/certs/key.pem (1679 bytes)
	I0920 18:30:59.820055  506753 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-294290/.minikube/files/etc/ssl/certs/2996842.pem (1708 bytes)
	I0920 18:30:59.820718  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:30:59.856306  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 18:30:59.890380  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:30:59.933467  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:30:59.968346  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 18:31:00.038471  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:31:00.133448  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:31:00.177149  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:31:00.259875  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:31:00.299996  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/certs/299684.pem --> /usr/share/ca-certificates/299684.pem (1338 bytes)
	I0920 18:31:00.332560  506753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-294290/.minikube/files/etc/ssl/certs/2996842.pem --> /usr/share/ca-certificates/2996842.pem (1708 bytes)
	I0920 18:31:00.382964  506753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:31:00.427799  506753 ssh_runner.go:195] Run: openssl version
	I0920 18:31:00.438089  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:31:00.455315  506753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:31:00.468089  506753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:31:00.468256  506753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:31:00.485538  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:31:00.498378  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/299684.pem && ln -fs /usr/share/ca-certificates/299684.pem /etc/ssl/certs/299684.pem"
	I0920 18:31:00.512009  506753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299684.pem
	I0920 18:31:00.518008  506753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:50 /usr/share/ca-certificates/299684.pem
	I0920 18:31:00.518179  506753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299684.pem
	I0920 18:31:00.530038  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/299684.pem /etc/ssl/certs/51391683.0"
	I0920 18:31:00.542659  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2996842.pem && ln -fs /usr/share/ca-certificates/2996842.pem /etc/ssl/certs/2996842.pem"
	I0920 18:31:00.555566  506753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2996842.pem
	I0920 18:31:00.561482  506753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:50 /usr/share/ca-certificates/2996842.pem
	I0920 18:31:00.561779  506753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2996842.pem
	I0920 18:31:00.571637  506753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2996842.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 18:31:00.584448  506753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:31:00.592744  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 18:31:00.600978  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 18:31:00.609193  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 18:31:00.617819  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 18:31:00.626992  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 18:31:00.635765  506753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 18:31:00.647360  506753 kubeadm.go:392] StartCluster: {Name:old-k8s-version-475170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-475170 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:31:00.647519  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 18:31:00.647631  506753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:31:00.709092  506753 cri.go:89] found id: "3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872"
	I0920 18:31:00.709167  506753 cri.go:89] found id: "8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc"
	I0920 18:31:00.709186  506753 cri.go:89] found id: "e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76"
	I0920 18:31:00.709206  506753 cri.go:89] found id: "93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:31:00.709238  506753 cri.go:89] found id: "233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd"
	I0920 18:31:00.709264  506753 cri.go:89] found id: "e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6"
	I0920 18:31:00.709284  506753 cri.go:89] found id: "3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:31:00.709307  506753 cri.go:89] found id: "199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333"
	I0920 18:31:00.709340  506753 cri.go:89] found id: ""
	I0920 18:31:00.709412  506753 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0920 18:31:00.722839  506753 cri.go:116] JSON = null
	W0920 18:31:00.722942  506753 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0920 18:31:00.723064  506753 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:31:00.732964  506753 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 18:31:00.733027  506753 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 18:31:00.733112  506753 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 18:31:00.745658  506753 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:31:00.746093  506753 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-475170" does not appear in /home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 18:31:00.746230  506753 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-294290/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-475170" cluster setting kubeconfig missing "old-k8s-version-475170" context setting]
	I0920 18:31:00.746516  506753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/kubeconfig: {Name:mk99ef3647d0cf66fbb7a624c924e5cee2350dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:31:00.747909  506753 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 18:31:00.757254  506753 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0920 18:31:00.757290  506753 kubeadm.go:597] duration metric: took 24.243412ms to restartPrimaryControlPlane
	I0920 18:31:00.757299  506753 kubeadm.go:394] duration metric: took 109.953708ms to StartCluster
	I0920 18:31:00.757316  506753 settings.go:142] acquiring lock: {Name:mk4f88389204d2653ab82e878e61c50b8437ae37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:31:00.757392  506753 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 18:31:00.758042  506753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/kubeconfig: {Name:mk99ef3647d0cf66fbb7a624c924e5cee2350dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:31:00.758258  506753 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 18:31:00.758578  506753 config.go:182] Loaded profile config "old-k8s-version-475170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 18:31:00.758630  506753 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 18:31:00.758701  506753 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-475170"
	I0920 18:31:00.758720  506753 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-475170"
	W0920 18:31:00.758731  506753 addons.go:243] addon storage-provisioner should already be in state true
	I0920 18:31:00.758755  506753 host.go:66] Checking if "old-k8s-version-475170" exists ...
	I0920 18:31:00.758765  506753 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-475170"
	I0920 18:31:00.758784  506753 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-475170"
	I0920 18:31:00.759131  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:31:00.759272  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:31:00.761417  506753 addons.go:69] Setting dashboard=true in profile "old-k8s-version-475170"
	I0920 18:31:00.761455  506753 addons.go:234] Setting addon dashboard=true in "old-k8s-version-475170"
	W0920 18:31:00.761464  506753 addons.go:243] addon dashboard should already be in state true
	I0920 18:31:00.761495  506753 host.go:66] Checking if "old-k8s-version-475170" exists ...
	I0920 18:31:00.762003  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:31:00.762206  506753 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-475170"
	I0920 18:31:00.762223  506753 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-475170"
	W0920 18:31:00.762230  506753 addons.go:243] addon metrics-server should already be in state true
	I0920 18:31:00.762255  506753 host.go:66] Checking if "old-k8s-version-475170" exists ...
	I0920 18:31:00.762671  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:31:00.764259  506753 out.go:177] * Verifying Kubernetes components...
	I0920 18:31:00.770743  506753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:31:00.804649  506753 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-475170"
	W0920 18:31:00.804671  506753 addons.go:243] addon default-storageclass should already be in state true
	I0920 18:31:00.804696  506753 host.go:66] Checking if "old-k8s-version-475170" exists ...
	I0920 18:31:00.805128  506753 cli_runner.go:164] Run: docker container inspect old-k8s-version-475170 --format={{.State.Status}}
	I0920 18:31:00.809631  506753 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 18:31:00.809742  506753 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:31:00.812478  506753 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:31:00.812505  506753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:31:00.812569  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:31:00.812707  506753 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:31:00.812714  506753 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:31:00.812747  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:31:00.838581  506753 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0920 18:31:00.847820  506753 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0920 18:31:00.852485  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0920 18:31:00.852511  506753 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0920 18:31:00.852602  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:31:00.858030  506753 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:00.858054  506753 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:31:00.858114  506753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-475170
	I0920 18:31:00.878455  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:31:00.887246  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:31:00.911808  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:31:00.928844  506753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/old-k8s-version-475170/id_rsa Username:docker}
	I0920 18:31:00.947875  506753 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:31:00.978586  506753 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-475170" to be "Ready" ...
	I0920 18:31:01.029334  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:31:01.101102  506753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:31:01.101179  506753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 18:31:01.101373  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:01.125773  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0920 18:31:01.125809  506753 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0920 18:31:01.186688  506753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:31:01.186715  506753 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:31:01.220643  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0920 18:31:01.220677  506753 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0920 18:31:01.236823  506753 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:31:01.236849  506753 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0920 18:31:01.275356  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.275397  506753 retry.go:31] will retry after 258.416202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.293504  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:31:01.295523  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0920 18:31:01.295550  506753 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0920 18:31:01.327931  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.327966  506753 retry.go:31] will retry after 344.165227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.346241  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0920 18:31:01.346266  506753 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0920 18:31:01.371577  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0920 18:31:01.371659  506753 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0920 18:31:01.396448  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0920 18:31:01.396528  506753 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0920 18:31:01.397467  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.397536  506753 retry.go:31] will retry after 282.36717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.416494  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0920 18:31:01.416517  506753 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0920 18:31:01.434996  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0920 18:31:01.435047  506753 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0920 18:31:01.459926  506753 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 18:31:01.460009  506753 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0920 18:31:01.480734  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 18:31:01.535045  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 18:31:01.564517  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.564547  506753 retry.go:31] will retry after 258.348078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:01.650631  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.650665  506753 retry.go:31] will retry after 526.60715ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.672945  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:01.680586  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 18:31:01.783272  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.783308  506753 retry.go:31] will retry after 473.069614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:01.786445  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.786475  506753 retry.go:31] will retry after 290.94863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.823594  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 18:31:01.901434  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:01.901466  506753 retry.go:31] will retry after 230.244128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.078212  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:31:02.132703  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 18:31:02.178314  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 18:31:02.190482  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.190527  506753 retry.go:31] will retry after 630.767138ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.256927  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 18:31:02.308375  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.308423  506753 retry.go:31] will retry after 768.298604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:02.308581  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.308602  506753 retry.go:31] will retry after 835.709788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:02.373959  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.373992  506753 retry.go:31] will retry after 673.077489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.822229  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 18:31:02.911038  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.911075  506753 retry.go:31] will retry after 595.46633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:02.979722  506753 node_ready.go:53] error getting node "old-k8s-version-475170": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-475170": dial tcp 192.168.85.2:8443: connect: connection refused
	I0920 18:31:03.048014  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:03.077553  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:31:03.144605  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 18:31:03.213432  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.213521  506753 retry.go:31] will retry after 519.455195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:03.289947  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.290027  506753 retry.go:31] will retry after 846.810387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:03.320179  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.320272  506753 retry.go:31] will retry after 1.18576754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.507250  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 18:31:03.597081  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.597160  506753 retry.go:31] will retry after 1.359347738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.733346  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 18:31:03.887280  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:03.887312  506753 retry.go:31] will retry after 1.496068106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:04.137050  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 18:31:04.294552  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:04.294579  506753 retry.go:31] will retry after 1.662593802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:04.506383  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 18:31:04.648609  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:04.648637  506753 retry.go:31] will retry after 1.173685573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:04.957118  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:31:04.979796  506753 node_ready.go:53] error getting node "old-k8s-version-475170": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-475170": dial tcp 192.168.85.2:8443: connect: connection refused
	W0920 18:31:05.069161  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.069191  506753 retry.go:31] will retry after 2.35408638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.387641  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 18:31:05.530812  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.530842  506753 retry.go:31] will retry after 2.183743278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.822495  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 18:31:05.934935  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.934966  506753 retry.go:31] will retry after 1.826104256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:05.958308  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 18:31:06.071290  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:06.071321  506753 retry.go:31] will retry after 2.174890133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:06.980025  506753 node_ready.go:53] error getting node "old-k8s-version-475170": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-475170": dial tcp 192.168.85.2:8443: connect: connection refused
	I0920 18:31:07.423498  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 18:31:07.503236  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:07.503272  506753 retry.go:31] will retry after 2.029562665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:07.715646  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:07.762048  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 18:31:07.799717  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:07.799755  506753 retry.go:31] will retry after 3.17503337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 18:31:07.844714  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:07.844754  506753 retry.go:31] will retry after 4.268590937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:08.246921  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 18:31:08.320208  506753 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:08.320282  506753 retry.go:31] will retry after 1.657338397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 18:31:09.479126  506753 node_ready.go:53] error getting node "old-k8s-version-475170": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-475170": dial tcp 192.168.85.2:8443: connect: connection refused
	I0920 18:31:09.533506  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:31:09.978348  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:31:10.975498  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:31:12.114258  506753 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 18:31:17.461202  506753 node_ready.go:49] node "old-k8s-version-475170" has status "Ready":"True"
	I0920 18:31:17.461232  506753 node_ready.go:38] duration metric: took 16.482553949s for node "old-k8s-version-475170" to be "Ready" ...
	I0920 18:31:17.461243  506753 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:31:17.684394  506753 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-hc5pl" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.704908  506753 pod_ready.go:93] pod "coredns-74ff55c5b-hc5pl" in "kube-system" namespace has status "Ready":"True"
	I0920 18:31:17.704936  506753 pod_ready.go:82] duration metric: took 20.502314ms for pod "coredns-74ff55c5b-hc5pl" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.704947  506753 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.744501  506753 pod_ready.go:93] pod "etcd-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"True"
	I0920 18:31:17.744528  506753 pod_ready.go:82] duration metric: took 39.573042ms for pod "etcd-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.744556  506753 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.795228  506753 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"True"
	I0920 18:31:17.795256  506753 pod_ready.go:82] duration metric: took 50.690191ms for pod "kube-apiserver-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:17.795267  506753 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:31:18.997950  506753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.464402201s)
	I0920 18:31:18.998028  506753 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-475170"
	I0920 18:31:18.998132  506753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.019761988s)
	I0920 18:31:18.998189  506753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.022668559s)
	I0920 18:31:19.049655  506753 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.935350649s)
	I0920 18:31:19.051877  506753 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-475170 addons enable metrics-server
	
	I0920 18:31:19.053916  506753 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0920 18:31:19.055892  506753 addons.go:510] duration metric: took 18.297256409s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0920 18:31:19.802294  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:22.302239  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:24.302952  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:26.303444  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:28.802371  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:30.802727  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:32.804542  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:34.810951  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:37.310553  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:39.803051  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:42.304377  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:44.806526  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:47.356214  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:49.806536  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:52.315686  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:54.802651  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:56.802860  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:31:58.802903  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:00.803110  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:03.302604  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:05.302831  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:07.305214  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:09.802298  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:12.303231  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:14.802148  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:17.301946  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:19.802021  506753 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:20.802214  506753 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"True"
	I0920 18:32:20.802242  506753 pod_ready.go:82] duration metric: took 1m3.006966245s for pod "kube-controller-manager-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:20.802255  506753 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r9xl5" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:20.807464  506753 pod_ready.go:93] pod "kube-proxy-r9xl5" in "kube-system" namespace has status "Ready":"True"
	I0920 18:32:20.807489  506753 pod_ready.go:82] duration metric: took 5.225878ms for pod "kube-proxy-r9xl5" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:20.807500  506753 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:22.814032  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:25.315114  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:27.813306  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:29.813473  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:31.813543  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:33.814050  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:36.313538  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:38.313743  506753 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:38.813525  506753 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace has status "Ready":"True"
	I0920 18:32:38.813553  506753 pod_ready.go:82] duration metric: took 18.006044047s for pod "kube-scheduler-old-k8s-version-475170" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:38.813565  506753 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace to be "Ready" ...
	I0920 18:32:40.819506  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:43.319864  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:45.321750  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:47.867234  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:50.319745  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:52.319872  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:54.320288  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:56.320680  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:32:58.820130  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:01.319748  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:03.320750  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:05.321063  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:07.820361  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:10.321874  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:12.819437  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:14.820133  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:16.820784  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:19.320188  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:21.820227  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:24.319958  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:26.820004  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:29.319982  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:31.320342  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:33.326905  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:35.819613  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:37.819666  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:39.819844  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:41.819915  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:44.320464  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:46.819873  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:49.318903  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:51.320059  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:53.820409  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:56.320245  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:33:58.320520  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:00.343519  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:02.821066  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:05.320902  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:07.819491  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:09.820393  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:11.820929  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:14.320900  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:16.819223  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:19.320556  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:21.819969  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:23.820120  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:26.320467  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:28.818836  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:30.820033  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:32.825103  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:35.321052  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:37.819298  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:39.819953  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:41.820175  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:44.320020  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:46.820245  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:49.319476  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:51.818900  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:53.819357  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:55.819439  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:57.819976  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:34:59.820926  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:02.320643  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:04.820371  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:06.820538  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:09.320051  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:11.820519  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:13.829147  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:16.320987  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:18.819404  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:20.820238  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:23.319368  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:25.319483  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:27.320931  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:29.320970  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:31.820049  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:33.820719  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:36.320283  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:38.819888  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:40.820197  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:43.320477  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:45.322459  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:47.819567  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:49.819641  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:52.320514  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:54.320566  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:56.819702  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:35:58.820404  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:00.820667  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:03.319887  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:05.320011  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:07.820749  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:09.821112  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:11.827868  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:14.320776  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:16.820357  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:19.320321  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:21.819589  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:23.828014  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:26.320605  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:28.819974  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:30.820572  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:33.321325  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:35.820011  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:37.820711  506753 pod_ready.go:103] pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace has status "Ready":"False"
	I0920 18:36:38.819846  506753 pod_ready.go:82] duration metric: took 4m0.006265343s for pod "metrics-server-9975d5f86-2scnf" in "kube-system" namespace to be "Ready" ...
	E0920 18:36:38.819874  506753 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 18:36:38.819885  506753 pod_ready.go:39] duration metric: took 5m21.358631983s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:36:38.819901  506753 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:36:38.819931  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:36:38.819997  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:36:38.866542  506753 cri.go:89] found id: "819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0"
	I0920 18:36:38.866582  506753 cri.go:89] found id: "e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6"
	I0920 18:36:38.866587  506753 cri.go:89] found id: ""
	I0920 18:36:38.866595  506753 logs.go:276] 2 containers: [819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0 e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6]
	I0920 18:36:38.866675  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.870264  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.873657  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 18:36:38.873743  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:36:38.916187  506753 cri.go:89] found id: "b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99"
	I0920 18:36:38.916213  506753 cri.go:89] found id: "233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd"
	I0920 18:36:38.916218  506753 cri.go:89] found id: ""
	I0920 18:36:38.916225  506753 logs.go:276] 2 containers: [b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99 233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd]
	I0920 18:36:38.916288  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.919988  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.923309  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 18:36:38.923387  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:36:38.966874  506753 cri.go:89] found id: "bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32"
	I0920 18:36:38.966896  506753 cri.go:89] found id: "3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872"
	I0920 18:36:38.966903  506753 cri.go:89] found id: ""
	I0920 18:36:38.966910  506753 logs.go:276] 2 containers: [bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32 3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872]
	I0920 18:36:38.966965  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.970540  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:38.974077  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:36:38.974157  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:36:39.018766  506753 cri.go:89] found id: "186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca"
	I0920 18:36:39.018791  506753 cri.go:89] found id: "199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333"
	I0920 18:36:39.018797  506753 cri.go:89] found id: ""
	I0920 18:36:39.018806  506753 logs.go:276] 2 containers: [186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca 199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333]
	I0920 18:36:39.018868  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.023622  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.027638  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:36:39.027713  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:36:39.071854  506753 cri.go:89] found id: "101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102"
	I0920 18:36:39.071874  506753 cri.go:89] found id: "93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:36:39.071879  506753 cri.go:89] found id: ""
	I0920 18:36:39.071887  506753 logs.go:276] 2 containers: [101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102 93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e]
	I0920 18:36:39.071948  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.076615  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.085832  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:36:39.085908  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:36:39.127144  506753 cri.go:89] found id: "233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464"
	I0920 18:36:39.127175  506753 cri.go:89] found id: "3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:36:39.127180  506753 cri.go:89] found id: ""
	I0920 18:36:39.127187  506753 logs.go:276] 2 containers: [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464 3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6]
	I0920 18:36:39.127260  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.131753  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.136051  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 18:36:39.136155  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:36:39.174474  506753 cri.go:89] found id: "fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225"
	I0920 18:36:39.174535  506753 cri.go:89] found id: "8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc"
	I0920 18:36:39.174553  506753 cri.go:89] found id: ""
	I0920 18:36:39.174577  506753 logs.go:276] 2 containers: [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225 8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc]
	I0920 18:36:39.174650  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.178577  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.182429  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:36:39.182598  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:36:39.221837  506753 cri.go:89] found id: "b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509"
	I0920 18:36:39.221861  506753 cri.go:89] found id: ""
	I0920 18:36:39.221870  506753 logs.go:276] 1 containers: [b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509]
	I0920 18:36:39.221934  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.226108  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:36:39.226200  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:36:39.268628  506753 cri.go:89] found id: "9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84"
	I0920 18:36:39.268693  506753 cri.go:89] found id: "e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76"
	I0920 18:36:39.268712  506753 cri.go:89] found id: ""
	I0920 18:36:39.268736  506753 logs.go:276] 2 containers: [9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84 e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76]
	I0920 18:36:39.268829  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.272499  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:39.277382  506753 logs.go:123] Gathering logs for kube-proxy [101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102] ...
	I0920 18:36:39.277420  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102"
	I0920 18:36:39.327652  506753 logs.go:123] Gathering logs for storage-provisioner [e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76] ...
	I0920 18:36:39.327681  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76"
	I0920 18:36:39.370138  506753 logs.go:123] Gathering logs for containerd ...
	I0920 18:36:39.370174  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 18:36:39.438364  506753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:36:39.438410  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:36:39.459684  506753 logs.go:123] Gathering logs for kube-apiserver [819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0] ...
	I0920 18:36:39.459716  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0"
	I0920 18:36:39.544267  506753 logs.go:123] Gathering logs for coredns [3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872] ...
	I0920 18:36:39.544306  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872"
	I0920 18:36:39.586502  506753 logs.go:123] Gathering logs for kube-scheduler [199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333] ...
	I0920 18:36:39.586532  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333"
	I0920 18:36:39.658603  506753 logs.go:123] Gathering logs for kube-controller-manager [3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6] ...
	I0920 18:36:39.658639  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:36:39.730370  506753 logs.go:123] Gathering logs for kindnet [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225] ...
	I0920 18:36:39.730410  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225"
	I0920 18:36:39.788229  506753 logs.go:123] Gathering logs for kindnet [8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc] ...
	I0920 18:36:39.788261  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc"
	I0920 18:36:39.833400  506753 logs.go:123] Gathering logs for container status ...
	I0920 18:36:39.833433  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:36:39.914353  506753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:36:39.914386  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:36:40.099453  506753 logs.go:123] Gathering logs for etcd [b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99] ...
	I0920 18:36:40.099494  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99"
	I0920 18:36:40.147147  506753 logs.go:123] Gathering logs for etcd [233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd] ...
	I0920 18:36:40.147178  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd"
	I0920 18:36:40.199920  506753 logs.go:123] Gathering logs for kube-apiserver [e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6] ...
	I0920 18:36:40.199954  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6"
	I0920 18:36:40.256178  506753 logs.go:123] Gathering logs for kubernetes-dashboard [b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509] ...
	I0920 18:36:40.256216  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509"
	I0920 18:36:40.300277  506753 logs.go:123] Gathering logs for storage-provisioner [9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84] ...
	I0920 18:36:40.300310  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84"
	I0920 18:36:40.348014  506753 logs.go:123] Gathering logs for kube-proxy [93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e] ...
	I0920 18:36:40.348046  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:36:40.390793  506753 logs.go:123] Gathering logs for kube-controller-manager [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464] ...
	I0920 18:36:40.390821  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464"
	I0920 18:36:40.461822  506753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:36:40.461861  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 18:36:40.512597  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.390835     662 reflector.go:138] object-"kube-system"/"kindnet-token-t5zmp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-t5zmp" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.512833  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391083     662 reflector.go:138] object-"kube-system"/"coredns-token-4tb5x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-4tb5x" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.513043  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391195     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.513273  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391282     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7wm24": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7wm24" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.513497  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391349     662 reflector.go:138] object-"kube-system"/"metrics-server-token-ftv7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ftv7g" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.513699  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391412     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.513909  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391421     662 reflector.go:138] object-"default"/"default-token-pflvr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvr" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.514126  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391476     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-s4d9s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s4d9s" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:40.522762  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:21 old-k8s-version-475170 kubelet[662]: E0920 18:31:21.571773     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:40.522953  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:21 old-k8s-version-475170 kubelet[662]: E0920 18:31:21.781672     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.527348  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:43 old-k8s-version-475170 kubelet[662]: E0920 18:31:43.014693     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:40.527977  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:44 old-k8s-version-475170 kubelet[662]: E0920 18:31:44.859444     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.528308  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:45 old-k8s-version-475170 kubelet[662]: E0920 18:31:45.862927     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.528643  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:47 old-k8s-version-475170 kubelet[662]: E0920 18:31:47.458620     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.529169  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:56 old-k8s-version-475170 kubelet[662]: E0920 18:31:56.563261     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.530117  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:02 old-k8s-version-475170 kubelet[662]: E0920 18:32:02.910262     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.530449  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:07 old-k8s-version-475170 kubelet[662]: E0920 18:32:07.458841     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.532889  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:07 old-k8s-version-475170 kubelet[662]: E0920 18:32:07.571677     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:40.533216  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:20 old-k8s-version-475170 kubelet[662]: E0920 18:32:20.564228     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.533414  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:21 old-k8s-version-475170 kubelet[662]: E0920 18:32:21.560781     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.534011  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:35 old-k8s-version-475170 kubelet[662]: E0920 18:32:35.002687     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.534195  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:36 old-k8s-version-475170 kubelet[662]: E0920 18:32:36.562114     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.534524  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:37 old-k8s-version-475170 kubelet[662]: E0920 18:32:37.458573     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.534854  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:47 old-k8s-version-475170 kubelet[662]: E0920 18:32:47.560274     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.537289  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:50 old-k8s-version-475170 kubelet[662]: E0920 18:32:50.573726     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:40.537618  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:59 old-k8s-version-475170 kubelet[662]: E0920 18:32:59.560273     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.537805  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:01 old-k8s-version-475170 kubelet[662]: E0920 18:33:01.561247     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.538132  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:13 old-k8s-version-475170 kubelet[662]: E0920 18:33:13.560334     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.538315  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:16 old-k8s-version-475170 kubelet[662]: E0920 18:33:16.560810     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.538914  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:29 old-k8s-version-475170 kubelet[662]: E0920 18:33:29.189104     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.539144  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:29 old-k8s-version-475170 kubelet[662]: E0920 18:33:29.560759     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.539474  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:37 old-k8s-version-475170 kubelet[662]: E0920 18:33:37.458800     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.539664  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:41 old-k8s-version-475170 kubelet[662]: E0920 18:33:41.560924     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.539993  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:49 old-k8s-version-475170 kubelet[662]: E0920 18:33:49.560399     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.540183  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:56 old-k8s-version-475170 kubelet[662]: E0920 18:33:56.560918     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.540509  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:01 old-k8s-version-475170 kubelet[662]: E0920 18:34:01.560297     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.540693  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:08 old-k8s-version-475170 kubelet[662]: E0920 18:34:08.565922     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.541020  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:13 old-k8s-version-475170 kubelet[662]: E0920 18:34:13.560350     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.543460  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:19 old-k8s-version-475170 kubelet[662]: E0920 18:34:19.569001     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:40.543796  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:24 old-k8s-version-475170 kubelet[662]: E0920 18:34:24.564378     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.543985  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:30 old-k8s-version-475170 kubelet[662]: E0920 18:34:30.570033     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.544316  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:38 old-k8s-version-475170 kubelet[662]: E0920 18:34:38.560886     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.544502  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:41 old-k8s-version-475170 kubelet[662]: E0920 18:34:41.568748     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.545093  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:53 old-k8s-version-475170 kubelet[662]: E0920 18:34:53.437729     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.545277  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:56 old-k8s-version-475170 kubelet[662]: E0920 18:34:56.560875     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.545602  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:57 old-k8s-version-475170 kubelet[662]: E0920 18:34:57.458858     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.545927  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:09 old-k8s-version-475170 kubelet[662]: E0920 18:35:09.560326     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.546111  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:09 old-k8s-version-475170 kubelet[662]: E0920 18:35:09.561513     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.546295  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:21 old-k8s-version-475170 kubelet[662]: E0920 18:35:21.560735     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.546621  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:22 old-k8s-version-475170 kubelet[662]: E0920 18:35:22.560289     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.546804  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:34 old-k8s-version-475170 kubelet[662]: E0920 18:35:34.560606     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.547142  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:37 old-k8s-version-475170 kubelet[662]: E0920 18:35:37.560351     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.547327  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:47 old-k8s-version-475170 kubelet[662]: E0920 18:35:47.560695     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.547657  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:51 old-k8s-version-475170 kubelet[662]: E0920 18:35:51.560316     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.547844  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:00 old-k8s-version-475170 kubelet[662]: E0920 18:36:00.571086     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.548172  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:06 old-k8s-version-475170 kubelet[662]: E0920 18:36:06.560426     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.548358  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:15 old-k8s-version-475170 kubelet[662]: E0920 18:36:15.560776     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.548683  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: E0920 18:36:20.565096     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.548868  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.549202  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.549392  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 18:36:40.549403  506753 logs.go:123] Gathering logs for coredns [bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32] ...
	I0920 18:36:40.549418  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32"
	I0920 18:36:40.596117  506753 logs.go:123] Gathering logs for kube-scheduler [186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca] ...
	I0920 18:36:40.596147  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca"
	I0920 18:36:40.642804  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:40.642832  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 18:36:40.642899  506753 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 18:36:40.642912  506753 out.go:270]   Sep 20 18:36:15 old-k8s-version-475170 kubelet[662]: E0920 18:36:15.560776     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:15 old-k8s-version-475170 kubelet[662]: E0920 18:36:15.560776     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.642923  506753 out.go:270]   Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: E0920 18:36:20.565096     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	  Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: E0920 18:36:20.565096     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.642956  506753 out.go:270]   Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:40.642969  506753 out.go:270]   Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	  Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:40.642983  506753 out.go:270]   Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 18:36:40.642990  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:40.643009  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:50.643999  506753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:36:50.663478  506753 api_server.go:72] duration metric: took 5m49.905179208s to wait for apiserver process to appear ...
	I0920 18:36:50.663505  506753 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:36:50.663542  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:36:50.663604  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:36:50.724871  506753 cri.go:89] found id: "819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0"
	I0920 18:36:50.724890  506753 cri.go:89] found id: "e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6"
	I0920 18:36:50.724894  506753 cri.go:89] found id: ""
	I0920 18:36:50.724902  506753 logs.go:276] 2 containers: [819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0 e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6]
	I0920 18:36:50.724956  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.730579  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.735327  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 18:36:50.735404  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:36:50.800388  506753 cri.go:89] found id: "b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99"
	I0920 18:36:50.800410  506753 cri.go:89] found id: "233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd"
	I0920 18:36:50.800415  506753 cri.go:89] found id: ""
	I0920 18:36:50.800423  506753 logs.go:276] 2 containers: [b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99 233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd]
	I0920 18:36:50.800479  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.804276  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.807916  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 18:36:50.807989  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:36:50.886003  506753 cri.go:89] found id: "bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32"
	I0920 18:36:50.886027  506753 cri.go:89] found id: "3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872"
	I0920 18:36:50.886033  506753 cri.go:89] found id: ""
	I0920 18:36:50.886041  506753 logs.go:276] 2 containers: [bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32 3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872]
	I0920 18:36:50.886124  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.894588  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.898446  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:36:50.898543  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:36:50.947683  506753 cri.go:89] found id: "186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca"
	I0920 18:36:50.947706  506753 cri.go:89] found id: "199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333"
	I0920 18:36:50.947711  506753 cri.go:89] found id: ""
	I0920 18:36:50.947719  506753 logs.go:276] 2 containers: [186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca 199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333]
	I0920 18:36:50.947802  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.951394  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.954774  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:36:50.954847  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:36:50.993047  506753 cri.go:89] found id: "101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102"
	I0920 18:36:50.993069  506753 cri.go:89] found id: "93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:36:50.993074  506753 cri.go:89] found id: ""
	I0920 18:36:50.993081  506753 logs.go:276] 2 containers: [101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102 93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e]
	I0920 18:36:50.993139  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:50.996778  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.000391  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:36:51.000463  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:36:51.049636  506753 cri.go:89] found id: "233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464"
	I0920 18:36:51.049658  506753 cri.go:89] found id: "3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:36:51.049663  506753 cri.go:89] found id: ""
	I0920 18:36:51.049670  506753 logs.go:276] 2 containers: [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464 3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6]
	I0920 18:36:51.049726  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.053689  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.057386  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 18:36:51.057493  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:36:51.104092  506753 cri.go:89] found id: "fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225"
	I0920 18:36:51.104128  506753 cri.go:89] found id: "8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc"
	I0920 18:36:51.104134  506753 cri.go:89] found id: ""
	I0920 18:36:51.104159  506753 logs.go:276] 2 containers: [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225 8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc]
	I0920 18:36:51.104251  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.108435  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.112388  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 18:36:51.112464  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 18:36:51.167472  506753 cri.go:89] found id: "9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84"
	I0920 18:36:51.167496  506753 cri.go:89] found id: "e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76"
	I0920 18:36:51.167501  506753 cri.go:89] found id: ""
	I0920 18:36:51.167510  506753 logs.go:276] 2 containers: [9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84 e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76]
	I0920 18:36:51.167568  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.171433  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.174978  506753 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 18:36:51.175097  506753 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 18:36:51.213083  506753 cri.go:89] found id: "b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509"
	I0920 18:36:51.213104  506753 cri.go:89] found id: ""
	I0920 18:36:51.213112  506753 logs.go:276] 1 containers: [b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509]
	I0920 18:36:51.213173  506753 ssh_runner.go:195] Run: which crictl
	I0920 18:36:51.216824  506753 logs.go:123] Gathering logs for kubernetes-dashboard [b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509] ...
	I0920 18:36:51.216849  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509"
	I0920 18:36:51.279731  506753 logs.go:123] Gathering logs for kube-apiserver [e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6] ...
	I0920 18:36:51.279762  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6"
	I0920 18:36:51.349953  506753 logs.go:123] Gathering logs for etcd [233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd] ...
	I0920 18:36:51.349996  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd"
	I0920 18:36:51.410094  506753 logs.go:123] Gathering logs for coredns [bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32] ...
	I0920 18:36:51.410124  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32"
	I0920 18:36:51.471385  506753 logs.go:123] Gathering logs for coredns [3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872] ...
	I0920 18:36:51.471415  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872"
	I0920 18:36:51.548492  506753 logs.go:123] Gathering logs for kindnet [8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc] ...
	I0920 18:36:51.548522  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc"
	I0920 18:36:51.647765  506753 logs.go:123] Gathering logs for storage-provisioner [e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76] ...
	I0920 18:36:51.647796  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76"
	I0920 18:36:51.701179  506753 logs.go:123] Gathering logs for dmesg ...
	I0920 18:36:51.701210  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:36:51.746660  506753 logs.go:123] Gathering logs for kube-apiserver [819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0] ...
	I0920 18:36:51.746690  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0"
	I0920 18:36:51.844920  506753 logs.go:123] Gathering logs for etcd [b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99] ...
	I0920 18:36:51.844958  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99"
	I0920 18:36:51.899426  506753 logs.go:123] Gathering logs for kube-scheduler [199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333] ...
	I0920 18:36:51.899459  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333"
	I0920 18:36:51.951489  506753 logs.go:123] Gathering logs for storage-provisioner [9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84] ...
	I0920 18:36:51.951521  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84"
	I0920 18:36:52.004471  506753 logs.go:123] Gathering logs for kubelet ...
	I0920 18:36:52.004500  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 18:36:52.064540  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.390835     662 reflector.go:138] object-"kube-system"/"kindnet-token-t5zmp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-t5zmp" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.064778  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391083     662 reflector.go:138] object-"kube-system"/"coredns-token-4tb5x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-4tb5x" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.064987  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391195     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.065300  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391282     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7wm24": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7wm24" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.065536  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391349     662 reflector.go:138] object-"kube-system"/"metrics-server-token-ftv7g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ftv7g" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.065741  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391412     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.065953  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391421     662 reflector.go:138] object-"default"/"default-token-pflvr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pflvr" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.066168  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:17 old-k8s-version-475170 kubelet[662]: E0920 18:31:17.391476     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-s4d9s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s4d9s" is forbidden: User "system:node:old-k8s-version-475170" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-475170' and this object
	W0920 18:36:52.074825  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:21 old-k8s-version-475170 kubelet[662]: E0920 18:31:21.571773     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:52.075060  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:21 old-k8s-version-475170 kubelet[662]: E0920 18:31:21.781672     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.079411  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:43 old-k8s-version-475170 kubelet[662]: E0920 18:31:43.014693     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:52.080004  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:44 old-k8s-version-475170 kubelet[662]: E0920 18:31:44.859444     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.080332  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:45 old-k8s-version-475170 kubelet[662]: E0920 18:31:45.862927     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.080663  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:47 old-k8s-version-475170 kubelet[662]: E0920 18:31:47.458620     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.081179  506753 logs.go:138] Found kubelet problem: Sep 20 18:31:56 old-k8s-version-475170 kubelet[662]: E0920 18:31:56.563261     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.082120  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:02 old-k8s-version-475170 kubelet[662]: E0920 18:32:02.910262     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.082512  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:07 old-k8s-version-475170 kubelet[662]: E0920 18:32:07.458841     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.085080  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:07 old-k8s-version-475170 kubelet[662]: E0920 18:32:07.571677     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:52.085421  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:20 old-k8s-version-475170 kubelet[662]: E0920 18:32:20.564228     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.085607  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:21 old-k8s-version-475170 kubelet[662]: E0920 18:32:21.560781     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.086193  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:35 old-k8s-version-475170 kubelet[662]: E0920 18:32:35.002687     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.086376  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:36 old-k8s-version-475170 kubelet[662]: E0920 18:32:36.562114     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.086704  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:37 old-k8s-version-475170 kubelet[662]: E0920 18:32:37.458573     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.087040  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:47 old-k8s-version-475170 kubelet[662]: E0920 18:32:47.560274     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.089545  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:50 old-k8s-version-475170 kubelet[662]: E0920 18:32:50.573726     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:52.089879  506753 logs.go:138] Found kubelet problem: Sep 20 18:32:59 old-k8s-version-475170 kubelet[662]: E0920 18:32:59.560273     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.090076  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:01 old-k8s-version-475170 kubelet[662]: E0920 18:33:01.561247     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.090412  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:13 old-k8s-version-475170 kubelet[662]: E0920 18:33:13.560334     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.090597  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:16 old-k8s-version-475170 kubelet[662]: E0920 18:33:16.560810     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.091348  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:29 old-k8s-version-475170 kubelet[662]: E0920 18:33:29.189104     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.091538  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:29 old-k8s-version-475170 kubelet[662]: E0920 18:33:29.560759     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.091868  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:37 old-k8s-version-475170 kubelet[662]: E0920 18:33:37.458800     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.092051  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:41 old-k8s-version-475170 kubelet[662]: E0920 18:33:41.560924     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.092377  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:49 old-k8s-version-475170 kubelet[662]: E0920 18:33:49.560399     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.092560  506753 logs.go:138] Found kubelet problem: Sep 20 18:33:56 old-k8s-version-475170 kubelet[662]: E0920 18:33:56.560918     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.092888  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:01 old-k8s-version-475170 kubelet[662]: E0920 18:34:01.560297     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.093071  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:08 old-k8s-version-475170 kubelet[662]: E0920 18:34:08.565922     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.093394  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:13 old-k8s-version-475170 kubelet[662]: E0920 18:34:13.560350     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.095843  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:19 old-k8s-version-475170 kubelet[662]: E0920 18:34:19.569001     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0920 18:36:52.096169  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:24 old-k8s-version-475170 kubelet[662]: E0920 18:34:24.564378     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.096353  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:30 old-k8s-version-475170 kubelet[662]: E0920 18:34:30.570033     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.096678  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:38 old-k8s-version-475170 kubelet[662]: E0920 18:34:38.560886     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.096868  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:41 old-k8s-version-475170 kubelet[662]: E0920 18:34:41.568748     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.097452  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:53 old-k8s-version-475170 kubelet[662]: E0920 18:34:53.437729     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.097637  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:56 old-k8s-version-475170 kubelet[662]: E0920 18:34:56.560875     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.097960  506753 logs.go:138] Found kubelet problem: Sep 20 18:34:57 old-k8s-version-475170 kubelet[662]: E0920 18:34:57.458858     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.098285  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:09 old-k8s-version-475170 kubelet[662]: E0920 18:35:09.560326     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.098471  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:09 old-k8s-version-475170 kubelet[662]: E0920 18:35:09.561513     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.098656  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:21 old-k8s-version-475170 kubelet[662]: E0920 18:35:21.560735     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.098982  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:22 old-k8s-version-475170 kubelet[662]: E0920 18:35:22.560289     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.099198  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:34 old-k8s-version-475170 kubelet[662]: E0920 18:35:34.560606     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.099528  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:37 old-k8s-version-475170 kubelet[662]: E0920 18:35:37.560351     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.099711  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:47 old-k8s-version-475170 kubelet[662]: E0920 18:35:47.560695     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.100096  506753 logs.go:138] Found kubelet problem: Sep 20 18:35:51 old-k8s-version-475170 kubelet[662]: E0920 18:35:51.560316     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.100299  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:00 old-k8s-version-475170 kubelet[662]: E0920 18:36:00.571086     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.100627  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:06 old-k8s-version-475170 kubelet[662]: E0920 18:36:06.560426     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.100811  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:15 old-k8s-version-475170 kubelet[662]: E0920 18:36:15.560776     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.101138  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: E0920 18:36:20.565096     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.101322  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.101649  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.101834  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.102158  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: E0920 18:36:45.560462     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.102342  506753 logs.go:138] Found kubelet problem: Sep 20 18:36:50 old-k8s-version-475170 kubelet[662]: E0920 18:36:50.560674     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 18:36:52.102352  506753 logs.go:123] Gathering logs for kube-scheduler [186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca] ...
	I0920 18:36:52.102369  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca"
	I0920 18:36:52.182143  506753 logs.go:123] Gathering logs for kube-proxy [101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102] ...
	I0920 18:36:52.182175  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102"
	I0920 18:36:52.230742  506753 logs.go:123] Gathering logs for containerd ...
	I0920 18:36:52.230772  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 18:36:52.304450  506753 logs.go:123] Gathering logs for container status ...
	I0920 18:36:52.304534  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:36:52.385355  506753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:36:52.385437  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:36:52.596145  506753 logs.go:123] Gathering logs for kube-proxy [93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e] ...
	I0920 18:36:52.596231  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:36:52.646915  506753 logs.go:123] Gathering logs for kube-controller-manager [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464] ...
	I0920 18:36:52.646941  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464"
	I0920 18:36:52.734279  506753 logs.go:123] Gathering logs for kube-controller-manager [3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6] ...
	I0920 18:36:52.734360  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:36:52.834536  506753 logs.go:123] Gathering logs for kindnet [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225] ...
	I0920 18:36:52.834576  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225"
	I0920 18:36:52.892448  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:52.892473  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 18:36:52.892558  506753 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 18:36:52.892569  506753 out.go:270]   Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.892693  506753 out.go:270]   Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	  Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.892708  506753 out.go:270]   Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.892730  506753 out.go:270]   Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: E0920 18:36:45.560462     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	  Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: E0920 18:36:45.560462     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.892740  506753 out.go:270]   Sep 20 18:36:50 old-k8s-version-475170 kubelet[662]: E0920 18:36:50.560674     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 18:36:50 old-k8s-version-475170 kubelet[662]: E0920 18:36:50.560674     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 18:36:52.892746  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:52.892756  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:37:02.892915  506753 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0920 18:37:02.908774  506753 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0920 18:37:02.911340  506753 out.go:201] 
	W0920 18:37:02.913428  506753 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 18:37:02.913473  506753 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 18:37:02.913496  506753 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 18:37:02.913506  506753 out.go:270] * 
	* 
	W0920 18:37:02.914882  506753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:37:02.916769  506753 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
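Note: the stderr above ends with minikube's own suggestion for the K8S_UNHEALTHY_CONTROL_PLANE exit (see the "Suggestion" and "Related issue" lines): purge the profile and retry. A minimal sketch of that recovery, reusing only the profile name and flags already recorded in the failed command above (illustrative only; not something the test run executed):

	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0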
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-475170
helpers_test.go:235: (dbg) docker inspect old-k8s-version-475170:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c",
	        "Created": "2024-09-20T18:27:43.433943504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 506951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:30:52.95807732Z",
	            "FinishedAt": "2024-09-20T18:30:51.531926164Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c/hosts",
	        "LogPath": "/var/lib/docker/containers/e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c/e1fab191f1e133beac62a793b6bd563fa1cd7bd886b88fcb483ab092f260ce1c-json.log",
	        "Name": "/old-k8s-version-475170",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-475170:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-475170",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/09d0b1443c08b785d7c1e9fbbf7ab7aeb2f8949dc0836e3d5f8bfd5191de3700-init/diff:/var/lib/docker/overlay2/3c4c9ed4137da049c491f1302314a8de7bd30a1897b7cd29bbcd1724ef9b7a93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09d0b1443c08b785d7c1e9fbbf7ab7aeb2f8949dc0836e3d5f8bfd5191de3700/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09d0b1443c08b785d7c1e9fbbf7ab7aeb2f8949dc0836e3d5f8bfd5191de3700/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09d0b1443c08b785d7c1e9fbbf7ab7aeb2f8949dc0836e3d5f8bfd5191de3700/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-475170",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-475170/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-475170",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-475170",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-475170",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fecbb382151d6db234f0c8f1f79e1770907b1911c18b70aa7f8762cbce5b898",
	            "SandboxKey": "/var/run/docker/netns/5fecbb382151",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-475170": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0affc8d07ed54260b9b6de02cf5b6039415049b628299127baea7a03238007da",
	                    "EndpointID": "c2ed92551019d6cc795c994f3f5003dbd020dbb95b21e05b9fb998cd03c8dc76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-475170",
	                        "e1fab191f1e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-475170 -n old-k8s-version-475170
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-475170 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-475170 logs -n 25: (2.488396562s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-652317                              | cert-expiration-652317   | jenkins | v1.34.0 | 20 Sep 24 18:26 UTC | 20 Sep 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-804673                               | force-systemd-env-804673 | jenkins | v1.34.0 | 20 Sep 24 18:26 UTC | 20 Sep 24 18:26 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-804673                            | force-systemd-env-804673 | jenkins | v1.34.0 | 20 Sep 24 18:26 UTC | 20 Sep 24 18:26 UTC |
	| start   | -p cert-options-222283                                 | cert-options-222283      | jenkins | v1.34.0 | 20 Sep 24 18:26 UTC | 20 Sep 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-222283 ssh                                | cert-options-222283      | jenkins | v1.34.0 | 20 Sep 24 18:27 UTC | 20 Sep 24 18:27 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-222283 -- sudo                         | cert-options-222283      | jenkins | v1.34.0 | 20 Sep 24 18:27 UTC | 20 Sep 24 18:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-222283                                 | cert-options-222283      | jenkins | v1.34.0 | 20 Sep 24 18:27 UTC | 20 Sep 24 18:27 UTC |
	| start   | -p old-k8s-version-475170                              | old-k8s-version-475170   | jenkins | v1.34.0 | 20 Sep 24 18:27 UTC | 20 Sep 24 18:30 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-652317                              | cert-expiration-652317   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-652317                              | cert-expiration-652317   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	| start   | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:31 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-475170        | old-k8s-version-475170   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-475170                              | old-k8s-version-475170   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-475170             | old-k8s-version-475170   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-475170                              | old-k8s-version-475170   | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-949863             | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:31 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-949863                  | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:36 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-949863 image list                           | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC | 20 Sep 24 18:36 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC | 20 Sep 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC | 20 Sep 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC | 20 Sep 24 18:36 UTC |
	| delete  | -p no-preload-949863                                   | no-preload-949863        | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC | 20 Sep 24 18:36 UTC |
	| start   | -p embed-certs-320115                                  | embed-certs-320115       | jenkins | v1.34.0 | 20 Sep 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
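
	The table above is the audit trail of minikube invocations recorded on this host. As a hedged reproduction note (the report only captures this snapshot): for a profile that still exists on the machine, a similar audit table and the start log below are normally part of the output of the logs command that this report itself recommends, for example:

		out/minikube-linux-arm64 logs --file=logs.txt -p embed-certs-320115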
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:36:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:36:53.099651  517404 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:53.099822  517404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:53.099832  517404 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:53.099837  517404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:53.100172  517404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:36:53.100655  517404 out.go:352] Setting JSON to false
	I0920 18:36:53.101675  517404 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8363,"bootTime":1726849050,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:36:53.101771  517404 start.go:139] virtualization:  
	I0920 18:36:53.104634  517404 out.go:177] * [embed-certs-320115] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:36:53.107096  517404 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:36:53.107172  517404 notify.go:220] Checking for updates...
	I0920 18:36:53.111889  517404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:36:53.113737  517404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 18:36:53.115736  517404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 18:36:53.117507  517404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:36:53.119132  517404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:36:53.121713  517404 config.go:182] Loaded profile config "old-k8s-version-475170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 18:36:53.121867  517404 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:36:53.149405  517404 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:36:53.149787  517404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:36:53.218372  517404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:36:53.20835883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:36:53.218477  517404 docker.go:318] overlay module found
	I0920 18:36:53.220764  517404 out.go:177] * Using the docker driver based on user configuration
	I0920 18:36:53.222737  517404 start.go:297] selected driver: docker
	I0920 18:36:53.222756  517404 start.go:901] validating driver "docker" against <nil>
	I0920 18:36:53.222770  517404 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:36:53.223524  517404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:36:53.272809  517404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:36:53.2630508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:36:53.273011  517404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:36:53.273239  517404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:36:53.275220  517404 out.go:177] * Using Docker driver with root privileges
	I0920 18:36:53.276939  517404 cni.go:84] Creating CNI manager for ""
	I0920 18:36:53.277008  517404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 18:36:53.277025  517404 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:36:53.277114  517404 start.go:340] cluster config:
	{Name:embed-certs-320115 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-320115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:36:53.279067  517404 out.go:177] * Starting "embed-certs-320115" primary control-plane node in "embed-certs-320115" cluster
	I0920 18:36:53.280633  517404 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 18:36:53.282839  517404 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 18:36:53.285082  517404 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:36:53.285145  517404 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 18:36:53.285159  517404 cache.go:56] Caching tarball of preloaded images
	I0920 18:36:53.285194  517404 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 18:36:53.285245  517404 preload.go:172] Found /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 18:36:53.285256  517404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 18:36:53.285372  517404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/embed-certs-320115/config.json ...
	I0920 18:36:53.285390  517404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/embed-certs-320115/config.json: {Name:mkd8a3072a2af9152fa9dc2788f04ffe4bdcf7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0920 18:36:53.304802  517404 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed is of wrong architecture
	I0920 18:36:53.304826  517404 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 18:36:53.304924  517404 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 18:36:53.304947  517404 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 18:36:53.304953  517404 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 18:36:53.304964  517404 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 18:36:53.304972  517404 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 18:36:53.438489  517404 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 18:36:53.438542  517404 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:36:53.438573  517404 start.go:360] acquireMachinesLock for embed-certs-320115: {Name:mk4635f2feccf9ba90a03b81efcf4b8bd48aa25f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:36:53.439143  517404 start.go:364] duration metric: took 538.432µs to acquireMachinesLock for "embed-certs-320115"
	I0920 18:36:53.439182  517404 start.go:93] Provisioning new machine with config: &{Name:embed-certs-320115 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-320115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 18:36:53.439273  517404 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:36:52.230742  506753 logs.go:123] Gathering logs for containerd ...
	I0920 18:36:52.230772  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 18:36:52.304450  506753 logs.go:123] Gathering logs for container status ...
	I0920 18:36:52.304534  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:36:52.385355  506753 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:36:52.385437  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:36:52.596145  506753 logs.go:123] Gathering logs for kube-proxy [93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e] ...
	I0920 18:36:52.596231  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e"
	I0920 18:36:52.646915  506753 logs.go:123] Gathering logs for kube-controller-manager [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464] ...
	I0920 18:36:52.646941  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464"
	I0920 18:36:52.734279  506753 logs.go:123] Gathering logs for kube-controller-manager [3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6] ...
	I0920 18:36:52.734360  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6"
	I0920 18:36:52.834536  506753 logs.go:123] Gathering logs for kindnet [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225] ...
	I0920 18:36:52.834576  506753 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225"
	I0920 18:36:52.892448  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:52.892473  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 18:36:52.892558  506753 out.go:270] X Problems detected in kubelet:
	W0920 18:36:52.892569  506753 out.go:270]   Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.892693  506753 out.go:270]   Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.892708  506753 out.go:270]   Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 18:36:52.892730  506753 out.go:270]   Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: E0920 18:36:45.560462     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	W0920 18:36:52.892740  506753 out.go:270]   Sep 20 18:36:50 old-k8s-version-475170 kubelet[662]: E0920 18:36:50.560674     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 18:36:52.892746  506753 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:52.892756  506753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:53.443210  517404 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0920 18:36:53.443533  517404 start.go:159] libmachine.API.Create for "embed-certs-320115" (driver="docker")
	I0920 18:36:53.443573  517404 client.go:168] LocalClient.Create starting
	I0920 18:36:53.443673  517404 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-294290/.minikube/certs/ca.pem
	I0920 18:36:53.443710  517404 main.go:141] libmachine: Decoding PEM data...
	I0920 18:36:53.443728  517404 main.go:141] libmachine: Parsing certificate...
	I0920 18:36:53.443796  517404 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-294290/.minikube/certs/cert.pem
	I0920 18:36:53.443823  517404 main.go:141] libmachine: Decoding PEM data...
	I0920 18:36:53.443837  517404 main.go:141] libmachine: Parsing certificate...
	I0920 18:36:53.444263  517404 cli_runner.go:164] Run: docker network inspect embed-certs-320115 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:36:53.460270  517404 cli_runner.go:211] docker network inspect embed-certs-320115 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:36:53.460434  517404 network_create.go:284] running [docker network inspect embed-certs-320115] to gather additional debugging logs...
	I0920 18:36:53.460460  517404 cli_runner.go:164] Run: docker network inspect embed-certs-320115
	W0920 18:36:53.475747  517404 cli_runner.go:211] docker network inspect embed-certs-320115 returned with exit code 1
	I0920 18:36:53.475780  517404 network_create.go:287] error running [docker network inspect embed-certs-320115]: docker network inspect embed-certs-320115: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-320115 not found
	I0920 18:36:53.475793  517404 network_create.go:289] output of [docker network inspect embed-certs-320115]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-320115 not found
	
	** /stderr **
	I0920 18:36:53.475891  517404 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:36:53.502708  517404 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-758fc8c66451 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:12:0e:b8:19} reservation:<nil>}
	I0920 18:36:53.503308  517404 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bc7d965af1d7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cf:0d:c8:d3} reservation:<nil>}
	I0920 18:36:53.503672  517404 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6633e4dc314a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c2:55:69:54} reservation:<nil>}
	I0920 18:36:53.504258  517404 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001861130}
	I0920 18:36:53.504280  517404 network_create.go:124] attempt to create docker network embed-certs-320115 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0920 18:36:53.504364  517404 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-320115 embed-certs-320115
	I0920 18:36:53.582330  517404 network_create.go:108] docker network embed-certs-320115 192.168.76.0/24 created
	I0920 18:36:53.582366  517404 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-320115" container
	I0920 18:36:53.582444  517404 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:36:53.601804  517404 cli_runner.go:164] Run: docker volume create embed-certs-320115 --label name.minikube.sigs.k8s.io=embed-certs-320115 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:36:53.623694  517404 oci.go:103] Successfully created a docker volume embed-certs-320115
	I0920 18:36:53.623799  517404 cli_runner.go:164] Run: docker run --rm --name embed-certs-320115-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-320115 --entrypoint /usr/bin/test -v embed-certs-320115:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 18:36:54.283109  517404 oci.go:107] Successfully prepared a docker volume embed-certs-320115
	I0920 18:36:54.283164  517404 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 18:36:54.283186  517404 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:36:54.283263  517404 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-320115:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:37:02.892915  506753 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0920 18:37:02.908774  506753 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0920 18:37:02.911340  506753 out.go:201] 
	W0920 18:37:02.913428  506753 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 18:37:02.913473  506753 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 18:37:02.913496  506753 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 18:37:02.913506  506753 out.go:270] * 
	W0920 18:37:02.914882  506753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 18:37:02.916769  506753 out.go:201] 
	I0920 18:36:58.581121  517404 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-320115:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.297818487s)
	I0920 18:36:58.581167  517404 kic.go:203] duration metric: took 4.297972907s to extract preloaded images to volume ...
	W0920 18:36:58.581330  517404 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:36:58.581466  517404 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:36:58.639799  517404 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-320115 --name embed-certs-320115 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-320115 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-320115 --network embed-certs-320115 --ip 192.168.76.2 --volume embed-certs-320115:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 18:36:58.962977  517404 cli_runner.go:164] Run: docker container inspect embed-certs-320115 --format={{.State.Running}}
	I0920 18:36:58.983445  517404 cli_runner.go:164] Run: docker container inspect embed-certs-320115 --format={{.State.Status}}
	I0920 18:36:59.004014  517404 cli_runner.go:164] Run: docker exec embed-certs-320115 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:36:59.092109  517404 oci.go:144] the created container "embed-certs-320115" has a running status.
	I0920 18:36:59.092151  517404 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-294290/.minikube/machines/embed-certs-320115/id_rsa...
	I0920 18:36:59.442964  517404 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-294290/.minikube/machines/embed-certs-320115/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:36:59.471799  517404 cli_runner.go:164] Run: docker container inspect embed-certs-320115 --format={{.State.Status}}
	I0920 18:36:59.497521  517404 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:36:59.497541  517404 kic_runner.go:114] Args: [docker exec --privileged embed-certs-320115 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:36:59.596255  517404 cli_runner.go:164] Run: docker container inspect embed-certs-320115 --format={{.State.Status}}
	I0920 18:36:59.633750  517404 machine.go:93] provisionDockerMachine start ...
	I0920 18:36:59.633860  517404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-320115
	I0920 18:36:59.661886  517404 main.go:141] libmachine: Using SSH client type: native
	I0920 18:36:59.662211  517404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0920 18:36:59.662223  517404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:36:59.663136  517404 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0920 18:37:02.794848  517404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-320115
	
	I0920 18:37:02.794883  517404 ubuntu.go:169] provisioning hostname "embed-certs-320115"
	I0920 18:37:02.794949  517404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-320115
	I0920 18:37:02.823168  517404 main.go:141] libmachine: Using SSH client type: native
	I0920 18:37:02.823638  517404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0920 18:37:02.823656  517404 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-320115 && echo "embed-certs-320115" | sudo tee /etc/hostname
	I0920 18:37:02.993113  517404 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-320115
	
	I0920 18:37:02.993187  517404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-320115
	I0920 18:37:03.058944  517404 main.go:141] libmachine: Using SSH client type: native
	I0920 18:37:03.060615  517404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0920 18:37:03.060659  517404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-320115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-320115/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-320115' | sudo tee -a /etc/hosts; 
				fi
			fi
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ede576265552f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   a0efff585a993       dashboard-metrics-scraper-8d5bb5db8-pq4wb
	b12b0d4c8fc78       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   e1580a63ce184       kubernetes-dashboard-cd95d586-7hgck
	9bc5c63f72335       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   be88f50945a5e       storage-provisioner
	06e2f5d021682       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   438e739a03a63       busybox
	fdda1626f0e17       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   e15777df47851       kindnet-s7pqm
	101769e8441e4       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   0dfcadfb47ec6       kube-proxy-r9xl5
	bba0d98fd2986       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   9c33e4383dd18       coredns-74ff55c5b-hc5pl
	819a0e2152e92       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   67cdc3675975f       kube-apiserver-old-k8s-version-475170
	233d68ae9f8fd       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   d966aa5991863       kube-controller-manager-old-k8s-version-475170
	186afb87158e2       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   ee56d24e1b2b3       kube-scheduler-old-k8s-version-475170
	b41a67962b904       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   68762caf8ecaa       etcd-old-k8s-version-475170
	abdc058d5bf03       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   1344202376c79       busybox
	3e5b0a7d9d673       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   04df706415e30       coredns-74ff55c5b-hc5pl
	8a4ee0253af37       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   1e5184a405cd7       kindnet-s7pqm
	e126978f873c2       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   ce54f659ac3ff       storage-provisioner
	93d93c2e41082       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   af05d5fb3dc40       kube-proxy-r9xl5
	233df6b6eb986       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   97703aa4f7aff       etcd-old-k8s-version-475170
	e9fb3ff5b999e       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   8c4369c306753       kube-apiserver-old-k8s-version-475170
	3537612de5ba3       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   4fa363b408416       kube-controller-manager-old-k8s-version-475170
	199b68be37f6d       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   92c2fa2c62c67       kube-scheduler-old-k8s-version-475170
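
	The container status table above comes from the gather step logged earlier in this report (sudo crictl ps -a, with a docker ps -a fallback). Assuming the old-k8s-version-475170 profile is still running on the same host (hypothetical; the report is only a snapshot), the table can be reproduced directly with:

		out/minikube-linux-arm64 -p old-k8s-version-475170 ssh -- sudo crictl ps -a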
	
	
	==> containerd <==
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.588503467Z" level=info msg="CreateContainer within sandbox \"a0efff585a993a5af1af07e4f2aec72ba9af2fdedc9b19fa4141923724aac4cf\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03\""
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.589142884Z" level=info msg="StartContainer for \"7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03\""
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.661438222Z" level=info msg="StartContainer for \"7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03\" returns successfully"
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.685570104Z" level=info msg="shim disconnected" id=7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03 namespace=k8s.io
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.685634736Z" level=warning msg="cleaning up after shim disconnected" id=7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03 namespace=k8s.io
	Sep 20 18:33:28 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:28.685644590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 18:33:29 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:29.203361031Z" level=info msg="RemoveContainer for \"458fe621bc5b0396e9f90fd9ee16fa72a3eae9bbe4f71493d2f7892b485f8654\""
	Sep 20 18:33:29 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:33:29.211533686Z" level=info msg="RemoveContainer for \"458fe621bc5b0396e9f90fd9ee16fa72a3eae9bbe4f71493d2f7892b485f8654\" returns successfully"
	Sep 20 18:34:19 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:19.560836592Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:34:19 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:19.566562504Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 20 18:34:19 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:19.568106764Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 20 18:34:19 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:19.568195608Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.566873722Z" level=info msg="CreateContainer within sandbox \"a0efff585a993a5af1af07e4f2aec72ba9af2fdedc9b19fa4141923724aac4cf\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.582783127Z" level=info msg="CreateContainer within sandbox \"a0efff585a993a5af1af07e4f2aec72ba9af2fdedc9b19fa4141923724aac4cf\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced\""
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.583436723Z" level=info msg="StartContainer for \"ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced\""
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.649797066Z" level=info msg="StartContainer for \"ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced\" returns successfully"
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.674217855Z" level=info msg="shim disconnected" id=ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced namespace=k8s.io
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.674285596Z" level=warning msg="cleaning up after shim disconnected" id=ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced namespace=k8s.io
	Sep 20 18:34:52 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:52.674298494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 18:34:53 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:53.444591588Z" level=info msg="RemoveContainer for \"7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03\""
	Sep 20 18:34:53 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:34:53.450019013Z" level=info msg="RemoveContainer for \"7ea05fbc489fcc2fd44ac2ed876a2c16e599f11fdbcba289c8443511302eee03\" returns successfully"
	Sep 20 18:37:01 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:37:01.561313679Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:37:01 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:37:01.567159572Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 20 18:37:01 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:37:01.568904704Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 20 18:37:01 old-k8s-version-475170 containerd[569]: time="2024-09-20T18:37:01.568920056Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
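
	The containerd section above corresponds to the journalctl gather step logged earlier (sudo journalctl -u containerd -n 400). Assuming the same profile is still up, the equivalent manual check would be roughly:

		out/minikube-linux-arm64 -p old-k8s-version-475170 ssh -- sudo journalctl -u containerd -n 400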
	
	
	==> coredns [3e5b0a7d9d673d78cc6bc4fb42b865a56a854a554083c93d5469d52ce1613872] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58556 - 12422 "HINFO IN 5817523817575646683.2101472075602393078. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025602109s
	
	
	==> coredns [bba0d98fd29865d157261c8482ac52d89e6ffc1f1e5b727ff5d3d5262a37fd32] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51031 - 60533 "HINFO IN 6952676865435153296.5597418273855234125. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005053874s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0920 18:31:49.633081       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 18:31:19.631850609 +0000 UTC m=+0.022610062) (total time: 30.001115039s):
	Trace[2019727887]: [30.001115039s] [30.001115039s] END
	E0920 18:31:49.633896       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 18:31:49.634451       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 18:31:19.63284753 +0000 UTC m=+0.023606983) (total time: 30.001578555s):
	Trace[939984059]: [30.001578555s] [30.001578555s] END
	E0920 18:31:49.634595       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 18:31:49.634757       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 18:31:19.632615145 +0000 UTC m=+0.023374598) (total time: 30.002080355s):
	Trace[911902081]: [30.002080355s] [30.002080355s] END
	E0920 18:31:49.634836       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
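
	The coredns logs above are gathered with crictl logs, and the node description below with kubectl describe nodes, matching the gather steps logged earlier in this report. Assuming a kubeconfig context named after the profile (minikube's default), the node description can be reproduced with something like:

		kubectl --context old-k8s-version-475170 describe node old-k8s-version-475170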
	
	
	==> describe nodes <==
	Name:               old-k8s-version-475170
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-475170
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=old-k8s-version-475170
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_28_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:28:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-475170
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:37:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:32:17 +0000   Fri, 20 Sep 2024 18:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:32:17 +0000   Fri, 20 Sep 2024 18:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:32:17 +0000   Fri, 20 Sep 2024 18:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:32:17 +0000   Fri, 20 Sep 2024 18:28:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-475170
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d58842631dd42d1ad44339ae2202e0b
	  System UUID:                ad9b70f1-6a67-44f4-901d-bcd96df814a1
	  Boot ID:                    b363b069-6c72-47b0-a80b-36cf6b75e261
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-hc5pl                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m28s
	  kube-system                 etcd-old-k8s-version-475170                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m35s
	  kube-system                 kindnet-s7pqm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m28s
	  kube-system                 kube-apiserver-old-k8s-version-475170             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-old-k8s-version-475170    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-r9xl5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-old-k8s-version-475170             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 metrics-server-9975d5f86-2scnf                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-pq4wb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-7hgck               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m55s (x5 over 8m55s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s (x4 over 8m55s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s (x4 over 8m55s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m36s                  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s                  kubelet     Node old-k8s-version-475170 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s                  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m28s                  kubelet     Node old-k8s-version-475170 status is now: NodeReady
	  Normal  Starting                 8m27s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-475170 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep20 17:08] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [233df6b6eb986baede4db039e732ee1858dd85728e913ffd728598b107989bfd] <==
	raft2024/09/20 18:28:10 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/09/20 18:28:10 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/09/20 18:28:10 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/09/20 18:28:10 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-09-20 18:28:10.453493 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-20 18:28:10.456790 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-20 18:28:10.456980 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-20 18:28:10.457075 I | etcdserver: published {Name:old-k8s-version-475170 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-09-20 18:28:10.457441 I | embed: ready to serve client requests
	2024-09-20 18:28:10.457558 I | embed: ready to serve client requests
	2024-09-20 18:28:10.458979 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-20 18:28:10.472166 I | embed: serving client requests on 192.168.85.2:2379
	2024-09-20 18:28:34.418513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:28:43.355990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:28:53.355902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:03.355906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:13.355950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:23.355871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:33.355989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:43.356000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:29:53.355931 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:30:03.356060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:30:13.356535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:30:23.356093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:30:33.355939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [b41a67962b9045e22b2075573e3f11825adf11fb40f066ced5d6a9c9b4586c99] <==
	2024-09-20 18:33:04.355099 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:33:14.355055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:33:24.355115 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:33:34.355194 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:33:44.355505 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:33:54.355142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:04.355140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:14.355797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:24.355357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:34.355118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:44.355191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:34:54.355145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:04.355094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:14.355091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:24.355051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:34.355158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:44.355104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:35:54.355142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:04.355083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:14.355051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:24.355290 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:34.355139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:44.355198 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:36:54.355146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 18:37:04.355331 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:37:04 up  2:19,  0 users,  load average: 0.68, 1.63, 2.39
	Linux old-k8s-version-475170 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8a4ee0253af3725554a381aa7b6a8f89dbafb548f3de5993565a3866b25f01fc] <==
	I0920 18:28:41.212917       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0920 18:28:41.212962       1 metrics.go:61] Registering metrics
	I0920 18:28:41.213048       1 controller.go:374] Syncing nftables rules
	I0920 18:28:51.012482       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:28:51.012528       1 main.go:299] handling current node
	I0920 18:29:01.012590       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:01.012637       1 main.go:299] handling current node
	I0920 18:29:11.012784       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:11.013382       1 main.go:299] handling current node
	I0920 18:29:21.021408       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:21.021447       1 main.go:299] handling current node
	I0920 18:29:31.021343       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:31.021384       1 main.go:299] handling current node
	I0920 18:29:41.012604       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:41.012647       1 main.go:299] handling current node
	I0920 18:29:51.012115       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:29:51.012276       1 main.go:299] handling current node
	I0920 18:30:01.015635       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:30:01.015678       1 main.go:299] handling current node
	I0920 18:30:11.012883       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:30:11.012932       1 main.go:299] handling current node
	I0920 18:30:21.019901       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:30:21.019945       1 main.go:299] handling current node
	I0920 18:30:31.012457       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:30:31.012506       1 main.go:299] handling current node
	
	
	==> kindnet [fdda1626f0e17b465e141e79489b20add7508ca46cf9cd2f835cf37bb6674225] <==
	I0920 18:35:00.711861       1 main.go:299] handling current node
	I0920 18:35:10.720754       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:35:10.720792       1 main.go:299] handling current node
	I0920 18:35:20.711829       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:35:20.711866       1 main.go:299] handling current node
	I0920 18:35:30.715514       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:35:30.715551       1 main.go:299] handling current node
	I0920 18:35:40.720868       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:35:40.720906       1 main.go:299] handling current node
	I0920 18:35:50.720403       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:35:50.720443       1 main.go:299] handling current node
	I0920 18:36:00.719802       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:00.719849       1 main.go:299] handling current node
	I0920 18:36:10.715137       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:10.715177       1 main.go:299] handling current node
	I0920 18:36:20.712192       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:20.712227       1 main.go:299] handling current node
	I0920 18:36:30.711972       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:30.712032       1 main.go:299] handling current node
	I0920 18:36:40.719800       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:40.719836       1 main.go:299] handling current node
	I0920 18:36:50.719109       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:36:50.719142       1 main.go:299] handling current node
	I0920 18:37:00.712524       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0920 18:37:00.712559       1 main.go:299] handling current node
	
	
	==> kube-apiserver [819a0e2152e9284aada7d30f67dd8a064d3cd2804d41229ca0e7af02aa77f0c0] <==
	I0920 18:33:48.021452       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:33:48.021462       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 18:34:20.230021       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 18:34:20.230101       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 18:34:20.230112       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:34:24.925789       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:34:24.925834       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:34:24.925844       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:34:55.888029       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:34:55.888073       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:34:55.888232       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:35:34.258821       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:35:34.258864       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:35:34.258873       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:36:07.947794       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:36:07.947837       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:36:07.947846       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 18:36:18.455678       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 18:36:18.455878       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 18:36:18.455896       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:36:51.579570       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:36:51.579646       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:36:51.579656       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [e9fb3ff5b999ec2b982365086ff02d308b31e23e032a6a83b361204f78b0edf6] <==
	I0920 18:28:18.011289       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0920 18:28:18.011503       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 18:28:18.022737       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0920 18:28:18.028570       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0920 18:28:18.028604       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0920 18:28:18.554855       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 18:28:18.620140       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0920 18:28:18.744349       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0920 18:28:18.745473       1 controller.go:606] quota admission added evaluator for: endpoints
	I0920 18:28:18.749491       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 18:28:19.038923       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 18:28:19.759094       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0920 18:28:20.240768       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0920 18:28:20.305427       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0920 18:28:36.065993       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0920 18:28:36.082560       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0920 18:28:52.469520       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:28:52.469566       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:28:52.469575       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:29:34.360992       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:29:34.361037       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:29:34.361076       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 18:30:18.679543       1 client.go:360] parsed scheme: "passthrough"
	I0920 18:30:18.679586       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 18:30:18.679595       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [233d68ae9f8fdfbccfa0227891e909d8315f56c07f1a81d70b9939806795f464] <==
	W0920 18:32:42.951059       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:33:06.578738       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:33:14.601547       1 request.go:655] Throttling request took 1.048360332s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 18:33:15.453169       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:33:37.080646       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:33:47.103772       1 request.go:655] Throttling request took 1.047929481s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0920 18:33:47.955221       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:34:07.582565       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:34:19.655776       1 request.go:655] Throttling request took 1.012866939s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0920 18:34:20.457214       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:34:38.084524       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:34:52.106893       1 request.go:655] Throttling request took 1.048463442s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0920 18:34:52.958620       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:35:08.586359       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:35:24.609057       1 request.go:655] Throttling request took 1.047919266s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 18:35:25.460593       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:35:39.087598       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:35:57.111131       1 request.go:655] Throttling request took 1.04827259s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0920 18:35:57.962554       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:36:09.589111       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:36:29.613099       1 request.go:655] Throttling request took 1.022359613s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 18:36:30.464732       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 18:36:40.091295       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 18:37:02.115245       1 request.go:655] Throttling request took 1.048448228s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0920 18:37:02.969481       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [3537612de5ba307587c7b835c7e961d7204cd3b56b96f9297584d7d664afdcc6] <==
	I0920 18:28:36.105914       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0920 18:28:36.106209       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0920 18:28:36.106535       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0920 18:28:36.106952       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-475170. Assuming now as a timestamp.
	I0920 18:28:36.107194       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0920 18:28:36.106718       1 event.go:291] "Event occurred" object="old-k8s-version-475170" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-475170 event: Registered Node old-k8s-version-475170 in Controller"
	I0920 18:28:36.111254       1 shared_informer.go:247] Caches are synced for job 
	I0920 18:28:36.120650       1 shared_informer.go:247] Caches are synced for GC 
	I0920 18:28:36.127108       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0920 18:28:36.138695       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-54pl7"
	I0920 18:28:36.171390       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r9xl5"
	I0920 18:28:36.172172       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-s7pqm"
	I0920 18:28:36.172203       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hc5pl"
	I0920 18:28:36.313087       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0920 18:28:36.604921       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0920 18:28:36.604949       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 18:28:36.613280       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0920 18:28:36.751851       1 request.go:655] Throttling request took 1.037897608s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	I0920 18:28:37.553697       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0920 18:28:37.553766       1 shared_informer.go:247] Caches are synced for resource quota 
	I0920 18:28:37.662477       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0920 18:28:37.678379       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-54pl7"
	I0920 18:30:38.742997       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0920 18:30:38.995719       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0920 18:30:39.412499       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
	
	
	==> kube-proxy [101769e8441e423925facf5f93161549531fb8ea432669373cb403b040be6102] <==
	I0920 18:31:20.023407       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0920 18:31:20.023597       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0920 18:31:20.068842       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 18:31:20.068940       1 server_others.go:185] Using iptables Proxier.
	I0920 18:31:20.069152       1 server.go:650] Version: v1.20.0
	I0920 18:31:20.069685       1 config.go:315] Starting service config controller
	I0920 18:31:20.069700       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 18:31:20.071609       1 config.go:224] Starting endpoint slice config controller
	I0920 18:31:20.071622       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 18:31:20.169861       1 shared_informer.go:247] Caches are synced for service config 
	I0920 18:31:20.172735       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [93d93c2e410820672b1234f1012c02eadac7b65e873d66423584e1bf387e045e] <==
	I0920 18:28:37.108878       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0920 18:28:37.109008       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0920 18:28:37.140206       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 18:28:37.140308       1 server_others.go:185] Using iptables Proxier.
	I0920 18:28:37.140524       1 server.go:650] Version: v1.20.0
	I0920 18:28:37.141032       1 config.go:315] Starting service config controller
	I0920 18:28:37.141051       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 18:28:37.142947       1 config.go:224] Starting endpoint slice config controller
	I0920 18:28:37.142967       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 18:28:37.242390       1 shared_informer.go:247] Caches are synced for service config 
	I0920 18:28:37.246762       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [186afb87158e29ba5b33e79e9d46317f8bf2277b41c9c1948bb4fe7bd52cb5ca] <==
	I0920 18:31:12.507775       1 serving.go:331] Generated self-signed cert in-memory
	W0920 18:31:17.188153       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 18:31:17.188192       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:31:17.188201       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:31:17.188213       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:31:17.540934       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0920 18:31:17.544450       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:31:17.544483       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:31:17.544502       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0920 18:31:17.744758       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [199b68be37f6dec0c3e145251b4cf0f7dc80451e8df9799cc519d5f26245c333] <==
	W0920 18:28:17.241028       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 18:28:17.241155       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 18:28:17.241242       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 18:28:17.311291       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0920 18:28:17.314186       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0920 18:28:17.314446       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 18:28:17.314526       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 18:28:17.318158       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:28:17.318265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:28:17.335629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:28:17.335686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:28:17.335776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:28:17.336041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:28:17.339198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:28:17.339555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:28:17.341646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 18:28:17.341873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:28:17.342037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:28:17.342728       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:28:18.298174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 18:28:18.298576       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:28:18.345455       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:28:18.373067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 18:28:18.506008       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0920 18:28:21.714763       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 20 18:35:34 old-k8s-version-475170 kubelet[662]: E0920 18:35:34.560606     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:35:37 old-k8s-version-475170 kubelet[662]: I0920 18:35:37.559965     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:35:37 old-k8s-version-475170 kubelet[662]: E0920 18:35:37.560351     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:35:47 old-k8s-version-475170 kubelet[662]: E0920 18:35:47.560695     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:35:51 old-k8s-version-475170 kubelet[662]: I0920 18:35:51.559938     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:35:51 old-k8s-version-475170 kubelet[662]: E0920 18:35:51.560316     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:36:00 old-k8s-version-475170 kubelet[662]: E0920 18:36:00.571086     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:36:06 old-k8s-version-475170 kubelet[662]: I0920 18:36:06.560029     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:36:06 old-k8s-version-475170 kubelet[662]: E0920 18:36:06.560426     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:36:15 old-k8s-version-475170 kubelet[662]: E0920 18:36:15.560776     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: I0920 18:36:20.564020     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:36:20 old-k8s-version-475170 kubelet[662]: E0920 18:36:20.565096     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:36:28 old-k8s-version-475170 kubelet[662]: E0920 18:36:28.561796     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: I0920 18:36:31.560135     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:36:31 old-k8s-version-475170 kubelet[662]: E0920 18:36:31.560613     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:36:39 old-k8s-version-475170 kubelet[662]: E0920 18:36:39.560758     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: I0920 18:36:45.560049     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:36:45 old-k8s-version-475170 kubelet[662]: E0920 18:36:45.560462     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:36:50 old-k8s-version-475170 kubelet[662]: E0920 18:36:50.560674     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 18:36:58 old-k8s-version-475170 kubelet[662]: I0920 18:36:58.560287     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: ede576265552fc52c31896d621c5509c0d403980d968d25712de91548bd4cced
	Sep 20 18:36:58 old-k8s-version-475170 kubelet[662]: E0920 18:36:58.560632     662 pod_workers.go:191] Error syncing pod 9896dc7c-456c-43f9-93c9-fd1c5545c9f5 ("dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pq4wb_kubernetes-dashboard(9896dc7c-456c-43f9-93c9-fd1c5545c9f5)"
	Sep 20 18:37:01 old-k8s-version-475170 kubelet[662]: E0920 18:37:01.569163     662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 20 18:37:01 old-k8s-version-475170 kubelet[662]: E0920 18:37:01.569217     662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 20 18:37:01 old-k8s-version-475170 kubelet[662]: E0920 18:37:01.569360     662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-ftv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 20 18:37:01 old-k8s-version-475170 kubelet[662]: E0920 18:37:01.569397     662 pod_workers.go:191] Error syncing pod e58bdba3-1017-4138-8586-cd842ea4f482 ("metrics-server-9975d5f86-2scnf_kube-system(e58bdba3-1017-4138-8586-cd842ea4f482)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [b12b0d4c8fc78ca871279a8f41127290f53d614f970d7916b20ce43b14066509] <==
	2024/09/20 18:31:39 Using namespace: kubernetes-dashboard
	2024/09/20 18:31:39 Using in-cluster config to connect to apiserver
	2024/09/20 18:31:39 Using secret token for csrf signing
	2024/09/20 18:31:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/20 18:31:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/20 18:31:39 Successful initial request to the apiserver, version: v1.20.0
	2024/09/20 18:31:39 Generating JWE encryption key
	2024/09/20 18:31:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/20 18:31:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/20 18:31:40 Initializing JWE encryption key from synchronized object
	2024/09/20 18:31:40 Creating in-cluster Sidecar client
	2024/09/20 18:31:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:31:40 Serving insecurely on HTTP port: 9090
	2024/09/20 18:32:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:32:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:33:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:33:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:34:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:34:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:35:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:35:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:36:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:36:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 18:31:39 Starting overwatch
	
	
	==> storage-provisioner [9bc5c63f723351b1d6ef80d461357d2aa2f2e9617925dad553809fae62b51f84] <==
	I0920 18:31:21.541580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:31:21.556008       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:31:21.556083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:31:39.040265       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:31:39.040715       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6991ee8c-7691-472b-a374-c5e79023fe8b", APIVersion:"v1", ResourceVersion:"794", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-475170_ab8ef3ce-47a3-4b31-a7fb-46cdff7406d2 became leader
	I0920 18:31:39.043260       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-475170_ab8ef3ce-47a3-4b31-a7fb-46cdff7406d2!
	I0920 18:31:39.154254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-475170_ab8ef3ce-47a3-4b31-a7fb-46cdff7406d2!
	
	
	==> storage-provisioner [e126978f873c21258f64888c111ebc9d582eb0398c12b5d85458a1909b5cbe76] <==
	I0920 18:28:38.230255       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:28:38.247294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:28:38.247353       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:28:38.265757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:28:38.268349       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6991ee8c-7691-472b-a374-c5e79023fe8b", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-475170_af4bb4dc-bf17-4283-984a-22a078c5e848 became leader
	I0920 18:28:38.268567       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-475170_af4bb4dc-bf17-4283-984a-22a078c5e848!
	I0920 18:28:38.369624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-475170_af4bb4dc-bf17-4283-984a-22a078c5e848!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-475170 -n old-k8s-version-475170
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-475170 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-2scnf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-475170 describe pod metrics-server-9975d5f86-2scnf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-475170 describe pod metrics-server-9975d5f86-2scnf: exit status 1 (110.481729ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-2scnf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-475170 describe pod metrics-server-9975d5f86-2scnf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.59s)

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.65
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.08
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 218.23
31 TestAddons/serial/GCPAuth/Namespaces 0.2
33 TestAddons/parallel/Registry 16.02
34 TestAddons/parallel/Ingress 19.83
35 TestAddons/parallel/InspektorGadget 12.2
36 TestAddons/parallel/MetricsServer 5.79
38 TestAddons/parallel/CSI 52.49
39 TestAddons/parallel/Headlamp 11.33
40 TestAddons/parallel/CloudSpanner 5.58
41 TestAddons/parallel/LocalPath 54.95
42 TestAddons/parallel/NvidiaDevicePlugin 6.67
43 TestAddons/parallel/Yakd 11.89
44 TestAddons/StoppedEnableDisable 12.31
45 TestCertOptions 38.6
46 TestCertExpiration 229.51
48 TestForceSystemdFlag 39.2
49 TestForceSystemdEnv 41.69
50 TestDockerEnvContainerd 45.02
55 TestErrorSpam/setup 29.91
56 TestErrorSpam/start 0.71
57 TestErrorSpam/status 0.98
58 TestErrorSpam/pause 1.79
59 TestErrorSpam/unpause 1.9
60 TestErrorSpam/stop 1.48
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 51.22
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 5.93
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.94
72 TestFunctional/serial/CacheCmd/cache/add_local 1.2
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
77 TestFunctional/serial/CacheCmd/cache/delete 0.15
78 TestFunctional/serial/MinikubeKubectlCmd 0.16
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 64.81
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.66
83 TestFunctional/serial/LogsFileCmd 1.71
84 TestFunctional/serial/InvalidService 4.39
86 TestFunctional/parallel/ConfigCmd 0.46
87 TestFunctional/parallel/DashboardCmd 10.69
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.21
94 TestFunctional/parallel/ServiceCmdConnect 9.66
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 25.08
98 TestFunctional/parallel/SSHCmd 0.69
99 TestFunctional/parallel/CpCmd 2.28
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.11
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
110 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
124 TestFunctional/parallel/ServiceCmd/List 0.59
125 TestFunctional/parallel/ProfileCmd/profile_list 0.54
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
129 TestFunctional/parallel/MountCmd/any-port 8.3
130 TestFunctional/parallel/ServiceCmd/Format 0.46
131 TestFunctional/parallel/ServiceCmd/URL 0.51
132 TestFunctional/parallel/MountCmd/specific-port 2.16
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 1.26
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
141 TestFunctional/parallel/ImageCommands/Setup 0.68
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 121.48
159 TestMultiControlPlane/serial/DeployApp 36.04
160 TestMultiControlPlane/serial/PingHostFromPods 1.78
161 TestMultiControlPlane/serial/AddWorkerNode 22.18
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
164 TestMultiControlPlane/serial/CopyFile 18.87
165 TestMultiControlPlane/serial/StopSecondaryNode 12.9
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.39
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 153.23
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.54
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
172 TestMultiControlPlane/serial/StopCluster 36.03
173 TestMultiControlPlane/serial/RestartCluster 79.16
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
175 TestMultiControlPlane/serial/AddSecondaryNode 46.44
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
180 TestJSONOutput/start/Command 53.5
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.67
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 1.27
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 36.68
206 TestKicCustomNetwork/use_default_bridge_network 34.18
207 TestKicExistingNetwork 33.68
208 TestKicCustomSubnet 35.67
209 TestKicStaticIP 37.4
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 69.76
214 TestMountStart/serial/StartWithMountFirst 7.52
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 5.82
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.7
219 TestMountStart/serial/VerifyMountPostDelete 0.28
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.44
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 68.45
226 TestMultiNode/serial/DeployApp2Nodes 15.05
227 TestMultiNode/serial/PingHostFrom2Pods 1.04
228 TestMultiNode/serial/AddNode 17.18
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.66
231 TestMultiNode/serial/CopyFile 10.02
232 TestMultiNode/serial/StopNode 2.22
233 TestMultiNode/serial/StartAfterStop 9.64
234 TestMultiNode/serial/RestartKeepsNodes 93.04
235 TestMultiNode/serial/DeleteNode 5.54
236 TestMultiNode/serial/StopMultiNode 24.05
237 TestMultiNode/serial/RestartMultiNode 50.66
238 TestMultiNode/serial/ValidateNameConflict 36.88
243 TestPreload 126.95
245 TestScheduledStopUnix 109
248 TestInsufficientStorage 10.04
249 TestRunningBinaryUpgrade 84.06
251 TestKubernetesUpgrade 351.96
252 TestMissingContainerUpgrade 179.73
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 40.78
256 TestNoKubernetes/serial/StartWithStopK8s 19.27
257 TestNoKubernetes/serial/Start 6.14
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.98
260 TestNoKubernetes/serial/Stop 1.22
261 TestNoKubernetes/serial/StartNoArgs 6.53
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
263 TestStoppedBinaryUpgrade/Setup 0.64
264 TestStoppedBinaryUpgrade/Upgrade 104.59
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
274 TestPause/serial/Start 53.54
275 TestPause/serial/SecondStartNoReconfiguration 7.44
276 TestPause/serial/Pause 0.98
277 TestPause/serial/VerifyStatus 0.35
278 TestPause/serial/Unpause 0.87
279 TestPause/serial/PauseAgain 1.56
280 TestPause/serial/DeletePaused 3.44
281 TestPause/serial/VerifyDeletedResources 0.49
289 TestNetworkPlugins/group/false 4.62
294 TestStartStop/group/old-k8s-version/serial/FirstStart 171.37
296 TestStartStop/group/no-preload/serial/FirstStart 62.08
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.86
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.7
299 TestStartStop/group/old-k8s-version/serial/Stop 12.42
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
302 TestStartStop/group/no-preload/serial/DeployApp 10.44
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
304 TestStartStop/group/no-preload/serial/Stop 12.11
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/no-preload/serial/SecondStart 290.65
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
310 TestStartStop/group/no-preload/serial/Pause 3.11
312 TestStartStop/group/embed-certs/serial/FirstStart 59.9
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
316 TestStartStop/group/old-k8s-version/serial/Pause 4.43
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.48
319 TestStartStop/group/embed-certs/serial/DeployApp 9.48
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.62
321 TestStartStop/group/embed-certs/serial/Stop 12.33
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/embed-certs/serial/SecondStart 266.45
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.9
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/embed-certs/serial/Pause 3.11
334 TestStartStop/group/newest-cni/serial/FirstStart 37.37
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
339 TestStartStop/group/newest-cni/serial/SecondStart 17.35
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.21
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
345 TestStartStop/group/newest-cni/serial/Pause 3.52
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.25
348 TestNetworkPlugins/group/auto/Start 55.22
349 TestNetworkPlugins/group/kindnet/Start 91.14
350 TestNetworkPlugins/group/auto/KubeletFlags 0.45
351 TestNetworkPlugins/group/auto/NetCatPod 9.61
352 TestNetworkPlugins/group/auto/DNS 0.2
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 63.61
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.41
359 TestNetworkPlugins/group/kindnet/DNS 0.23
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.22
362 TestNetworkPlugins/group/custom-flannel/Start 57.22
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.39
365 TestNetworkPlugins/group/calico/NetCatPod 10.33
366 TestNetworkPlugins/group/calico/DNS 0.26
367 TestNetworkPlugins/group/calico/Localhost 0.28
368 TestNetworkPlugins/group/calico/HairPin 0.25
369 TestNetworkPlugins/group/enable-default-cni/Start 78.3
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.43
372 TestNetworkPlugins/group/custom-flannel/DNS 0.23
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
375 TestNetworkPlugins/group/flannel/Start 53.91
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.38
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
383 TestNetworkPlugins/group/flannel/NetCatPod 9.38
384 TestNetworkPlugins/group/flannel/DNS 0.21
385 TestNetworkPlugins/group/bridge/Start 47.78
386 TestNetworkPlugins/group/flannel/Localhost 0.22
387 TestNetworkPlugins/group/flannel/HairPin 0.22
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 10.26
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-095043 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-095043 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.6461541s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.65s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 17:39:42.527869  299684 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0920 17:39:42.527955  299684 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-095043
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-095043: exit status 85 (69.623215ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-095043 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |          |
	|         | -p download-only-095043        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:39:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:39:34.925959  299690 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:39:34.926134  299690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:34.926143  299690 out.go:358] Setting ErrFile to fd 2...
	I0920 17:39:34.926149  299690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:34.926388  299690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	W0920 17:39:34.926525  299690 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-294290/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-294290/.minikube/config/config.json: no such file or directory
	I0920 17:39:34.926919  299690 out.go:352] Setting JSON to true
	I0920 17:39:34.927837  299690 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4925,"bootTime":1726849050,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 17:39:34.927911  299690 start.go:139] virtualization:  
	I0920 17:39:34.931464  299690 out.go:97] [download-only-095043] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 17:39:34.931682  299690 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:39:34.931721  299690 notify.go:220] Checking for updates...
	I0920 17:39:34.934380  299690 out.go:169] MINIKUBE_LOCATION=19672
	I0920 17:39:34.936755  299690 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:39:34.938969  299690 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:39:34.941471  299690 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 17:39:34.943796  299690 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 17:39:34.948484  299690 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:39:34.948781  299690 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:39:34.980658  299690 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:39:34.980767  299690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:35.036193  299690 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:39:35.024825345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:35.036302  299690 docker.go:318] overlay module found
	I0920 17:39:35.038937  299690 out.go:97] Using the docker driver based on user configuration
	I0920 17:39:35.038976  299690 start.go:297] selected driver: docker
	I0920 17:39:35.038985  299690 start.go:901] validating driver "docker" against <nil>
	I0920 17:39:35.039128  299690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:35.097150  299690 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:39:35.08707221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:35.097379  299690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:39:35.097680  299690 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 17:39:35.097859  299690 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:39:35.100232  299690 out.go:169] Using Docker driver with root privileges
	I0920 17:39:35.102607  299690 cni.go:84] Creating CNI manager for ""
	I0920 17:39:35.102690  299690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 17:39:35.102707  299690 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:39:35.102814  299690 start.go:340] cluster config:
	{Name:download-only-095043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-095043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:39:35.105298  299690 out.go:97] Starting "download-only-095043" primary control-plane node in "download-only-095043" cluster
	I0920 17:39:35.105342  299690 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 17:39:35.107605  299690 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 17:39:35.107654  299690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 17:39:35.107806  299690 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 17:39:35.125144  299690 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:39:35.125363  299690 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 17:39:35.125480  299690 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:39:35.167627  299690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 17:39:35.167655  299690 cache.go:56] Caching tarball of preloaded images
	I0920 17:39:35.167840  299690 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 17:39:35.170188  299690 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 17:39:35.170225  299690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 17:39:35.262926  299690 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 17:39:39.466929  299690 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	
	
	* The control-plane node download-only-095043 host does not exist
	  To start a cluster, run: "minikube start -p download-only-095043"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-095043
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-824252 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-824252 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.074988886s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 17:39:49.022142  299684 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0920 17:39:49.022182  299684 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-824252
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-824252: exit status 85 (66.640365ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-095043 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | -p download-only-095043        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| delete  | -p download-only-095043        | download-only-095043 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC | 20 Sep 24 17:39 UTC |
	| start   | -o=json --download-only        | download-only-824252 | jenkins | v1.34.0 | 20 Sep 24 17:39 UTC |                     |
	|         | -p download-only-824252        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:39:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:39:42.991069  299885 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:39:42.991205  299885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:42.991217  299885 out.go:358] Setting ErrFile to fd 2...
	I0920 17:39:42.991223  299885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:39:42.991478  299885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 17:39:42.991882  299885 out.go:352] Setting JSON to true
	I0920 17:39:42.992754  299885 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4933,"bootTime":1726849050,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 17:39:42.992827  299885 start.go:139] virtualization:  
	I0920 17:39:42.995246  299885 out.go:97] [download-only-824252] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:39:42.995428  299885 notify.go:220] Checking for updates...
	I0920 17:39:42.997082  299885 out.go:169] MINIKUBE_LOCATION=19672
	I0920 17:39:42.998995  299885 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:39:43.000712  299885 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:39:43.002513  299885 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 17:39:43.004488  299885 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 17:39:43.009692  299885 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:39:43.010036  299885 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:39:43.038220  299885 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:39:43.038334  299885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:43.096261  299885 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:39:43.086900355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:43.096372  299885 docker.go:318] overlay module found
	I0920 17:39:43.098589  299885 out.go:97] Using the docker driver based on user configuration
	I0920 17:39:43.098614  299885 start.go:297] selected driver: docker
	I0920 17:39:43.098620  299885 start.go:901] validating driver "docker" against <nil>
	I0920 17:39:43.098724  299885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:39:43.149558  299885 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:39:43.14059848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:39:43.149774  299885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:39:43.150067  299885 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 17:39:43.150231  299885 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:39:43.153386  299885 out.go:169] Using Docker driver with root privileges
	I0920 17:39:43.155503  299885 cni.go:84] Creating CNI manager for ""
	I0920 17:39:43.155566  299885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 17:39:43.155579  299885 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 17:39:43.155674  299885 start.go:340] cluster config:
	{Name:download-only-824252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-824252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:39:43.158025  299885 out.go:97] Starting "download-only-824252" primary control-plane node in "download-only-824252" cluster
	I0920 17:39:43.158055  299885 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 17:39:43.160198  299885 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 17:39:43.160234  299885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 17:39:43.160411  299885 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 17:39:43.176089  299885 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:39:43.176221  299885 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 17:39:43.176246  299885 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 17:39:43.176251  299885 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 17:39:43.176259  299885 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 17:39:43.216532  299885 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 17:39:43.216557  299885 cache.go:56] Caching tarball of preloaded images
	I0920 17:39:43.216723  299885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 17:39:43.219033  299885 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 17:39:43.219059  299885 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 17:39:43.303201  299885 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 17:39:47.272719  299885 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 17:39:47.272879  299885 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-294290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-824252 host does not exist
	  To start a cluster, run: "minikube start -p download-only-824252"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-824252
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 17:39:50.254183  299684 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-765360 --alsologtostderr --binary-mirror http://127.0.0.1:44001 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-765360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-765360
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-545041
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-545041: exit status 85 (69.610428ms)

                                                
                                                
-- stdout --
	* Profile "addons-545041" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545041"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-545041
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-545041: exit status 85 (67.06735ms)

-- stdout --
	* Profile "addons-545041" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545041"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (218.23s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-545041 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-545041 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m38.226380688s)
--- PASS: TestAddons/Setup (218.23s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-545041 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-545041 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/parallel/Registry (16.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.428253ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-n7tg6" [33f18f34-de1d-4cf8-8ecc-e9e8a5dcbaff] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004507279s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qshpk" [8a023632-d1b9-4e68-b4bd-f9572a1acfe5] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004235237s
addons_test.go:338: (dbg) Run:  kubectl --context addons-545041 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-545041 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-545041 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.05028236s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 ip
2024/09/20 17:47:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.02s)

TestAddons/parallel/Ingress (19.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-545041 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-545041 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-545041 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c139d458-23f4-4a91-b383-f87fea5ce6e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c139d458-23f4-4a91-b383-f87fea5ce6e9] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.018557509s
I0920 17:48:42.027688  299684 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-545041 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 addons disable ingress-dns --alsologtostderr -v=1: (1.239613648s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 addons disable ingress --alsologtostderr -v=1: (7.843213635s)
--- PASS: TestAddons/parallel/Ingress (19.83s)

TestAddons/parallel/InspektorGadget (12.2s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p64mf" [0f2f6a7e-76fb-4f1c-8d6d-4194c4f76a6c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004339958s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-545041
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-545041: (6.193267266s)
--- PASS: TestAddons/parallel/InspektorGadget (12.20s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.079981ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dmqpl" [c269bb8d-903d-4258-9fd2-fff102c918ac] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003900186s
addons_test.go:413: (dbg) Run:  kubectl --context addons-545041 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (52.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 17:47:20.879809  299684 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 17:47:20.885109  299684 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 17:47:20.885142  299684 kapi.go:107] duration metric: took 8.181963ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.191621ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-545041 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-545041 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [223fbd9e-1355-4afb-98a3-17916018db85] Pending
helpers_test.go:344: "task-pv-pod" [223fbd9e-1355-4afb-98a3-17916018db85] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [223fbd9e-1355-4afb-98a3-17916018db85] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005001598s
addons_test.go:528: (dbg) Run:  kubectl --context addons-545041 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545041 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545041 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-545041 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-545041 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-545041 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-545041 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [432a19a4-ec06-42c6-a5c3-1ed8223879fd] Pending
helpers_test.go:344: "task-pv-pod-restore" [432a19a4-ec06-42c6-a5c3-1ed8223879fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [432a19a4-ec06-42c6-a5c3-1ed8223879fd] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00344989s
addons_test.go:570: (dbg) Run:  kubectl --context addons-545041 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-545041 delete pod task-pv-pod-restore: (1.196015226s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-545041 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-545041 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.775235491s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.49s)

TestAddons/parallel/Headlamp (11.33s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-545041 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lrnbt" [29d8ecc2-c59b-47af-bfad-b998b1df552d] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-lrnbt" [29d8ecc2-c59b-47af-bfad-b998b1df552d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lrnbt" [29d8ecc2-c59b-47af-bfad-b998b1df552d] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003629934s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.33s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-82z2s" [dd7b6d52-0790-46c0-a874-c6216c21dba1] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004486139s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-545041
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (54.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-545041 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-545041 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d734a09-8bd4-406b-909a-38677ce7589f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d734a09-8bd4-406b-909a-38677ce7589f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d734a09-8bd4-406b-909a-38677ce7589f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004119781s
addons_test.go:938: (dbg) Run:  kubectl --context addons-545041 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 ssh "cat /opt/local-path-provisioner/pvc-5e5eacf6-b6d0-428a-bf15-0dc9e3e3a5c1_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-545041 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-545041 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.615744557s)
--- PASS: TestAddons/parallel/LocalPath (54.95s)

TestAddons/parallel/NvidiaDevicePlugin (6.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7lb5s" [b1b02761-6700-494a-9306-c64f38225c4a] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00377194s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-545041
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k954s" [e3f0b423-cbf1-43c1-9bea-898704f18145] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00306523s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-545041 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-545041 addons disable yakd --alsologtostderr -v=1: (5.885080108s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-545041
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-545041: (12.049920298s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-545041
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-545041
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-545041
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (38.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-222283 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-222283 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.935688363s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-222283 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-222283 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-222283 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-222283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-222283
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-222283: (1.992501764s)
--- PASS: TestCertOptions (38.60s)

TestCertExpiration (229.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652317 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0920 18:26:32.223804  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652317 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.926455791s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652317 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652317 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.215932862s)
helpers_test.go:175: Cleaning up "cert-expiration-652317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-652317
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-652317: (2.363290051s)
--- PASS: TestCertExpiration (229.51s)

TestForceSystemdFlag (39.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-691908 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-691908 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.703397413s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-691908 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-691908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-691908
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-691908: (2.142937085s)
--- PASS: TestForceSystemdFlag (39.20s)

TestForceSystemdEnv (41.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-804673 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-804673 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.845362196s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-804673 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-804673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-804673
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-804673: (2.446687402s)
--- PASS: TestForceSystemdEnv (41.69s)

TestDockerEnvContainerd (45.02s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-612919 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-612919 --driver=docker  --container-runtime=containerd: (29.563229854s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-612919"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qeGXCVEThuFB/agent.318673" SSH_AGENT_PID="318674" DOCKER_HOST=ssh://docker@127.0.0.1:33144 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qeGXCVEThuFB/agent.318673" SSH_AGENT_PID="318674" DOCKER_HOST=ssh://docker@127.0.0.1:33144 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qeGXCVEThuFB/agent.318673" SSH_AGENT_PID="318674" DOCKER_HOST=ssh://docker@127.0.0.1:33144 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.194079841s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qeGXCVEThuFB/agent.318673" SSH_AGENT_PID="318674" DOCKER_HOST=ssh://docker@127.0.0.1:33144 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-612919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-612919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-612919: (1.913369536s)
--- PASS: TestDockerEnvContainerd (45.02s)

TestErrorSpam/setup (29.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-388798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-388798 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-388798 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-388798 --driver=docker  --container-runtime=containerd: (29.91121449s)
--- PASS: TestErrorSpam/setup (29.91s)

TestErrorSpam/start (0.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 stop: (1.296963995s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-388798 --log_dir /tmp/nospam-388798 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-294290/.minikube/files/etc/test/nested/copy/299684/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-129075 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.216954274s)
--- PASS: TestFunctional/serial/StartWithProxy (51.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.93s)

=== RUN   TestFunctional/serial/SoftStart
I0920 17:51:28.445573  299684 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-129075 --alsologtostderr -v=8: (5.922643456s)
functional_test.go:663: soft start took 5.924932021s for "functional-129075" cluster.
I0920 17:51:34.368521  299684 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.93s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-129075 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:3.1: (1.470905121s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:3.3: (1.368220892s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 cache add registry.k8s.io/pause:latest: (1.104640241s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.94s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-129075 /tmp/TestFunctionalserialCacheCmdcacheadd_local2819423242/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache add minikube-local-cache-test:functional-129075
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache delete minikube-local-cache-test:functional-129075
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-129075
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.608648ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 cache reload: (1.164121105s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 kubectl -- --context functional-129075 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-129075 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (64.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-129075 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m4.808537214s)
functional_test.go:761: restart took 1m4.808652062s for "functional-129075" cluster.
I0920 17:52:47.461695  299684 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (64.81s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-129075 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 logs: (1.657664464s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 logs --file /tmp/TestFunctionalserialLogsFileCmd1339446693/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 logs --file /tmp/TestFunctionalserialLogsFileCmd1339446693/001/logs.txt: (1.704630532s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-129075 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-129075
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-129075: exit status 115 (669.154636ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31976 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-129075 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 config get cpus: exit status 14 (67.152996ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 config get cpus: exit status 14 (75.564424ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (10.69s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-129075 --alsologtostderr -v=1]
E0920 17:53:29.146597  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.153072  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.164498  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.185831  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.229498  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.313122  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.474622  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:29.796378  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-129075 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 333514: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.69s)
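
DashboardCmd starts the dashboard proxy as a background daemon and then tears it down; the cert_rotation errors interleaved above appear to come from a stale reference to the since-deleted addons-545041 profile's client.crt and do not affect the result. A rough equivalent of what the test drives, assuming the functional-129075 profile is running (the PID bookkeeping is an illustrative addition):

	# serve the dashboard on a fixed port and print only its URL
	out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-129075 --alsologtostderr -v=1 &
	DASH_PID=$!
	# ...open the printed URL, then stop the proxy
	kill "$DASH_PID" 2>/dev/null || true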

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-129075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (194.282476ms)

                                                
                                                
-- stdout --
	* [functional-129075] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:53:26.993368  333213 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:53:26.993539  333213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:53:26.993563  333213 out.go:358] Setting ErrFile to fd 2...
	I0920 17:53:26.993570  333213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:53:26.993916  333213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 17:53:26.994301  333213 out.go:352] Setting JSON to false
	I0920 17:53:26.999974  333213 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5757,"bootTime":1726849050,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 17:53:27.000062  333213 start.go:139] virtualization:  
	I0920 17:53:27.002530  333213 out.go:177] * [functional-129075] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:53:27.004489  333213 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:53:27.004560  333213 notify.go:220] Checking for updates...
	I0920 17:53:27.006957  333213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:53:27.009210  333213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:53:27.011911  333213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 17:53:27.014468  333213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:53:27.016963  333213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:53:27.019701  333213 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:53:27.020345  333213 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:53:27.057620  333213 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:53:27.057753  333213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:53:27.125928  333213 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:53:27.11526716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:53:27.126041  333213 docker.go:318] overlay module found
	I0920 17:53:27.128387  333213 out.go:177] * Using the docker driver based on existing profile
	I0920 17:53:27.130288  333213 start.go:297] selected driver: docker
	I0920 17:53:27.130315  333213 start.go:901] validating driver "docker" against &{Name:functional-129075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-129075 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:53:27.130434  333213 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:53:27.133047  333213 out.go:201] 
	W0920 17:53:27.135136  333213 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:53:27.137116  333213 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
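
The first DryRun invocation is meant to fail: --memory 250MB is below the usable minimum of 1800MB reported above, so start aborts with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) even in --dry-run mode, while the second invocation, which keeps the profile's existing memory setting, validates cleanly. Both paths, condensed:

	# undersized request: validation fails before anything is created (exit 23)
	out/minikube-linux-arm64 start -p functional-129075 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd; echo "exit: $?"
	# same dry run without overriding memory: exits 0
	out/minikube-linux-arm64 start -p functional-129075 --dry-run \
	  --driver=docker --container-runtime=containerd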

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-129075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-129075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.030062ms)

                                                
                                                
-- stdout --
	* [functional-129075] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:53:26.816295  333169 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:53:26.816523  333169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:53:26.816559  333169 out.go:358] Setting ErrFile to fd 2...
	I0920 17:53:26.816579  333169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:53:26.816966  333169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 17:53:26.817379  333169 out.go:352] Setting JSON to false
	I0920 17:53:26.818424  333169 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5757,"bootTime":1726849050,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 17:53:26.818531  333169 start.go:139] virtualization:  
	I0920 17:53:26.822468  333169 out.go:177] * [functional-129075] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 17:53:26.824283  333169 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:53:26.824351  333169 notify.go:220] Checking for updates...
	I0920 17:53:26.828188  333169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:53:26.829897  333169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 17:53:26.831832  333169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 17:53:26.833698  333169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:53:26.835785  333169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:53:26.838411  333169 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:53:26.838943  333169 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:53:26.871827  333169 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:53:26.871960  333169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:53:26.930199  333169 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:53:26.918765812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:53:26.930318  333169 docker.go:318] overlay module found
	I0920 17:53:26.932536  333169 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 17:53:26.934855  333169 start.go:297] selected driver: docker
	I0920 17:53:26.934871  333169 start.go:901] validating driver "docker" against &{Name:functional-129075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-129075 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:53:26.934995  333169 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:53:26.937914  333169 out.go:201] 
	W0920 17:53:26.939882  333169 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:53:26.941867  333169 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
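
InternationalLanguage repeats the undersized dry run and checks that the output is localized; the French lines above read "Utilisation du pilote docker basé sur le profil existant" ("Using the docker driver based on existing profile") and "L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" ("the requested memory allocation of 250MiB is less than the usable minimum of 1800MB"). The log does not show how the locale is selected; the sketch below assumes minikube picks it up from the environment, which may differ from what the test actually does:

	# request French output for the same failing dry run (LC_ALL here is an assumption)
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-129075 --dry-run \
	  --memory 250MB --driver=docker --container-runtime=containerd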

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
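
StatusCmd covers the three output modes of minikube status: the default table, a custom Go template that picks individual component states, and JSON. The same calls, with the template quoted for the shell:

	# default human-readable status
	out/minikube-linux-arm64 -p functional-129075 status
	# Go-template format: select individual component states
	out/minikube-linux-arm64 -p functional-129075 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# machine-readable JSON
	out/minikube-linux-arm64 -p functional-129075 status -o json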

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-129075 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-129075 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-9rf2m" [6d726d79-0b84-417c-911b-72473199e691] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-9rf2m" [6d726d79-0b84-417c-911b-72473199e691] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004536953s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31642
functional_test.go:1675: http://192.168.49.2:31642: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-9rf2m

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31642
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.66s)
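
ServiceCmdConnect deploys an echo server, exposes it as a NodePort service, resolves the node URL with minikube service --url, and asserts on the echoed request; the Hostname/Request Information text above is the echoserver's response body. The same flow by hand, with kubectl wait and curl standing in for the test's own polling and HTTP client:

	kubectl --context functional-129075 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-129075 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	kubectl --context functional-129075 wait --for=condition=ready pod \
	  -l app=hello-node-connect --timeout=120s
	# resolve the NodePort URL through minikube and fetch the echoed request
	URL=$(out/minikube-linux-arm64 -p functional-129075 service hello-node-connect --url)
	curl -s "$URL"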

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bee5aa71-5769-4994-9d5d-d736bd04a98c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003620315s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-129075 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-129075 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-129075 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-129075 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b525351c-2398-4740-93ca-b13fc6ed216f] Pending
helpers_test.go:344: "sp-pod" [b525351c-2398-4740-93ca-b13fc6ed216f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b525351c-2398-4740-93ca-b13fc6ed216f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004444786s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-129075 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-129075 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-129075 delete -f testdata/storage-provisioner/pod.yaml: (1.064362364s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-129075 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [021025bf-3ed8-48f2-9bb6-620f42d5d90a] Pending
helpers_test.go:344: "sp-pod" [021025bf-3ed8-48f2-9bb6-620f42d5d90a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00420099s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-129075 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.08s)
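
PersistentVolumeClaim checks that data written through a claim survives pod deletion: bind the PVC, start a pod that mounts it, touch /tmp/mount/foo, delete and recreate the pod, and list the mount again. The manifests themselves are not shown in the log, so this replay simply reuses the repo's testdata files, with kubectl wait standing in for the test's polling:

	kubectl --context functional-129075 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-129075 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-129075 wait --for=condition=ready pod sp-pod --timeout=180s
	kubectl --context functional-129075 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod and confirm the file written through the claim is still there
	kubectl --context functional-129075 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-129075 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-129075 wait --for=condition=ready pod sp-pod --timeout=180s
	kubectl --context functional-129075 exec sp-pod -- ls /tmp/mount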

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh -n functional-129075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cp functional-129075:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1308187060/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh -n functional-129075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh -n functional-129075 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)
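
CpCmd round-trips a file with minikube cp: host to node, node back to host, and host into a node path whose parent directories do not exist yet, verifying each copy with ssh cat. Condensed, with /tmp/cp-test.txt as an arbitrary host destination in place of the test's temp directory:

	# host file into the node, then read it back from inside
	out/minikube-linux-arm64 -p functional-129075 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-129075 ssh -n functional-129075 "sudo cat /home/docker/cp-test.txt"
	# node file back out to the host
	out/minikube-linux-arm64 -p functional-129075 cp functional-129075:/home/docker/cp-test.txt /tmp/cp-test.txt
	# copy into a node path that has to be created on the fly
	out/minikube-linux-arm64 -p functional-129075 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	out/minikube-linux-arm64 -p functional-129075 ssh -n functional-129075 "sudo cat /tmp/does/not/exist/cp-test.txt"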

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/299684/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/test/nested/copy/299684/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/299684.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/299684.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/299684.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /usr/share/ca-certificates/299684.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2996842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/2996842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2996842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /usr/share/ca-certificates/2996842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
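
FileSync and CertSync both verify that content staged on the host side shows up inside the node: a test file at /etc/test/nested/copy/299684/hosts and an extra certificate exposed as 299684.pem in both trust locations plus its hash link (299684 matches the test-process id seen elsewhere in this log). Reading them back from the host is just a series of ssh cats:

	# file sync target and its content ("Test file for checking file sync process")
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/test/nested/copy/299684/hosts"
	# synced certificate, in both trust locations and via its hash link
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/299684.pem"
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /usr/share/ca-certificates/299684.pem"
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo cat /etc/ssl/certs/51391683.0"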

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-129075 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active docker": exit status 1 (273.138611ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active crio": exit status 1 (276.395874ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
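
With containerd as the configured runtime, NonActiveRuntimeDisabled expects docker and crio to be disabled inside the node: systemctl is-active prints "inactive" and exits 3, which minikube ssh reports as a non-zero exit. The two probes from the log, plus a containerd check added here for contrast:

	# the non-active runtimes report "inactive" (remote exit status 3)
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active crio"
	# the configured runtime, for comparison (not part of the test)
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo systemctl is-active containerd"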

                                                
                                    
x
+
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 330666: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-129075 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1e19ac1d-1fcd-4ff7-b975-90228a4fc862] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1e19ac1d-1fcd-4ff7-b975-90228a4fc862] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00350862s
I0920 17:53:06.548385  299684 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-129075 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.57.121 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
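
The TunnelCmd sequence keeps minikube tunnel running in the background, deploys testdata/testsvc.yaml (a LoadBalancer-type nginx service, judging by the jsonpath used above), waits for the pod, reads the ingress IP the tunnel assigns, and fetches it directly; the tunnel is stopped again in DeleteTunnel below. By hand, with kubectl wait and curl as illustrative stand-ins:

	# the tunnel must stay up for the LoadBalancer IP to remain reachable
	out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	kubectl --context functional-129075 apply -f testdata/testsvc.yaml
	kubectl --context functional-129075 wait --for=condition=ready pod -l run=nginx-svc --timeout=240s
	# the tunnel populates .status.loadBalancer.ingress with a reachable IP
	IP=$(kubectl --context functional-129075 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
	kill "$TUNNEL_PID"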

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-129075 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-129075 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-129075 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-l9zfp" [81faaf29-93eb-4139-8cbc-446b910f7ebe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-l9zfp" [81faaf29-93eb-4139-8cbc-446b910f7ebe] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004827352s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "431.298293ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "111.220177ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service list -o json
functional_test.go:1494: Took "645.906977ms" to run "out/minikube-linux-arm64 -p functional-129075 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "426.902823ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "62.163495ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
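
The ProfileCmd timing subtests list profiles in four shapes; -l and --light skip validating each cluster's live status, which is presumably why the light variants return in roughly 60-110ms against roughly 430ms for the full listings. The four calls:

	out/minikube-linux-arm64 profile list               # table, validates cluster status
	out/minikube-linux-arm64 profile list -l            # table, light (no status validation)
	out/minikube-linux-arm64 profile list -o json       # JSON, validates cluster status
	out/minikube-linux-arm64 profile list -o json --light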

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31977
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdany-port1329687847/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726854804189444000" to /tmp/TestFunctionalparallelMountCmdany-port1329687847/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726854804189444000" to /tmp/TestFunctionalparallelMountCmdany-port1329687847/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726854804189444000" to /tmp/TestFunctionalparallelMountCmdany-port1329687847/001/test-1726854804189444000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (436.520644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:53:24.626274  299684 retry.go:31] will retry after 307.505492ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:53 test-1726854804189444000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh cat /mount-9p/test-1726854804189444000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-129075 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b34d139e-8ecc-44bb-961d-5ff12238bb31] Pending
helpers_test.go:344: "busybox-mount" [b34d139e-8ecc-44bb-961d-5ff12238bb31] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b34d139e-8ecc-44bb-961d-5ff12238bb31] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0920 17:53:30.437840  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [b34d139e-8ecc-44bb-961d-5ff12238bb31] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004464275s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-129075 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh stat /mount-9p/created-by-pod
E0920 17:53:31.719233  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdany-port1329687847/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.30s)
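
MountCmd/any-port 9p-mounts a host temp directory at /mount-9p, lets the guest and a busybox pod read and write through it, and unmounts; the first findmnt probe simply races the mount becoming ready, hence the single retried non-zero exit above. The basic flow, with mktemp and the explicit kill as illustrative additions:

	HOST_DIR=$(mktemp -d)
	out/minikube-linux-arm64 mount -p functional-129075 "$HOST_DIR:/mount-9p" --alsologtostderr -v=1 &
	MOUNT_PID=$!
	# confirm the 9p mount is visible inside the node, then inspect it
	out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-129075 ssh -- ls -la /mount-9p
	# clean up: unmount in the guest and stop the mount process
	out/minikube-linux-arm64 -p functional-129075 ssh "sudo umount -f /mount-9p"
	kill "$MOUNT_PID" 2>/dev/null || true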

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31977
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
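
Taken together, the ServiceCmd subtests list services and resolve the hello-node NodePort in several shapes: plain list, JSON list, an https URL, just the node IP via a Go template, and the default http URL. The same calls, with the template quoted for the shell:

	out/minikube-linux-arm64 -p functional-129075 service list
	out/minikube-linux-arm64 -p functional-129075 service list -o json
	out/minikube-linux-arm64 -p functional-129075 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-129075 service hello-node --url --format='{{.IP}}'
	out/minikube-linux-arm64 -p functional-129075 service hello-node --url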

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdspecific-port513296488/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (445.379046ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:53:32.925361  299684 retry.go:31] will retry after 458.079712ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdspecific-port513296488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "sudo umount -f /mount-9p"
E0920 17:53:34.281255  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh "sudo umount -f /mount-9p": exit status 1 (353.02742ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-129075 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdspecific-port513296488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)
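
The sequence above can be reproduced by hand: start a 9p mount on a fixed port, then verify it from inside the guest. A rough sketch, with /tmp/demo standing in for the temporary host directory the harness creates:

	# Terminal 1: expose a host directory inside the node over 9p on port 46464
	out/minikube-linux-arm64 mount -p functional-129075 /tmp/demo:/mount-9p --port 46464
	# Terminal 2: confirm the mount is visible from the guest
	out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-129075 ssh -- ls -la /mount-9p

The single failed findmnt followed by a retry in the log is expected while the mount is still being established.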

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-129075 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-129075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup275413795/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)
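
VerifyCleanup exercises the bulk-teardown path: three mounts are started against the same host directory, then a single --kill=true call stops every mount process for the profile. Condensed, with /tmp/demo as a stand-in host directory:

	out/minikube-linux-arm64 mount -p functional-129075 /tmp/demo:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-arm64 mount -p functional-129075 /tmp/demo:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-arm64 mount -p functional-129075 /tmp/demo:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-129075 ssh "findmnt -T" /mount1
	# Kill every mount process belonging to this profile in one call
	out/minikube-linux-arm64 mount -p functional-129075 --kill=true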

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 version -o=json --components: (1.258625532s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)
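
Both Version subtests wrap the same subcommand: --short prints only the minikube version, while the JSON form also reports the versions of the tooling inside the running node, which is why it takes noticeably longer:

	out/minikube-linux-arm64 -p functional-129075 version --short
	out/minikube-linux-arm64 -p functional-129075 version -o=json --components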

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-129075 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-129075
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-129075
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-129075 image ls --format short --alsologtostderr:
I0920 17:53:44.249238  336029 out.go:345] Setting OutFile to fd 1 ...
I0920 17:53:44.249424  336029 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.249454  336029 out.go:358] Setting ErrFile to fd 2...
I0920 17:53:44.249477  336029 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.249773  336029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
I0920 17:53:44.250444  336029 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.250606  336029 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.251289  336029 cli_runner.go:164] Run: docker container inspect functional-129075 --format={{.State.Status}}
I0920 17:53:44.274343  336029 ssh_runner.go:195] Run: systemctl --version
I0920 17:53:44.274394  336029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-129075
I0920 17:53:44.301099  336029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/functional-129075/id_rsa Username:docker}
I0920 17:53:44.396163  336029 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
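
The four ImageList subtests run the same listing with different output formats; per the stderr trace above, each shells into the node and reads the containerd image store via crictl:

	# Formats: short, table, json, yaml
	out/minikube-linux-arm64 -p functional-129075 image ls --format short
	out/minikube-linux-arm64 -p functional-129075 image ls --format table
	# What runs inside the node (from the stderr trace above)
	sudo crictl images --output json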

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-129075 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| docker.io/kicbase/echo-server               | functional-129075  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-129075  | sha256:758c0f | 992B   |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-129075 image ls --format table --alsologtostderr:
I0920 17:53:44.846342  336181 out.go:345] Setting OutFile to fd 1 ...
I0920 17:53:44.846510  336181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.846540  336181 out.go:358] Setting ErrFile to fd 2...
I0920 17:53:44.846561  336181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.846898  336181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
I0920 17:53:44.847664  336181 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.847829  336181 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.848383  336181 cli_runner.go:164] Run: docker container inspect functional-129075 --format={{.State.Status}}
I0920 17:53:44.876180  336181 ssh_runner.go:195] Run: systemctl --version
I0920 17:53:44.876230  336181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-129075
I0920 17:53:44.896953  336181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/functional-129075/id_rsa Username:docker}
I0920 17:53:44.999803  336181 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-129075 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:758c0f567167ecb230d506f8471d768c84010669347c895ba2ea3fa1ce7b0c3f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-129075"],"size":"992"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:24a140c548c075e487e45d0ee73b1
aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-129075"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/c
oredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"]
,"size":"71300"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89b
b3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-129075 image ls --format json --alsologtostderr:
I0920 17:53:44.560555  336097 out.go:345] Setting OutFile to fd 1 ...
I0920 17:53:44.560739  336097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.560745  336097 out.go:358] Setting ErrFile to fd 2...
I0920 17:53:44.560750  336097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.561462  336097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
I0920 17:53:44.563642  336097 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.563816  336097 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.564335  336097 cli_runner.go:164] Run: docker container inspect functional-129075 --format={{.State.Status}}
I0920 17:53:44.585941  336097 ssh_runner.go:195] Run: systemctl --version
I0920 17:53:44.586007  336097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-129075
I0920 17:53:44.618764  336097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/functional-129075/id_rsa Username:docker}
I0920 17:53:44.716759  336097 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-129075 image ls --format yaml --alsologtostderr:
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-129075
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:758c0f567167ecb230d506f8471d768c84010669347c895ba2ea3fa1ce7b0c3f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-129075
size: "992"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-129075 image ls --format yaml --alsologtostderr:
I0920 17:53:44.237924  336030 out.go:345] Setting OutFile to fd 1 ...
I0920 17:53:44.238127  336030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.238158  336030 out.go:358] Setting ErrFile to fd 2...
I0920 17:53:44.238179  336030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.238462  336030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
I0920 17:53:44.239155  336030 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.245450  336030 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.246338  336030 cli_runner.go:164] Run: docker container inspect functional-129075 --format={{.State.Status}}
I0920 17:53:44.267682  336030 ssh_runner.go:195] Run: systemctl --version
I0920 17:53:44.267732  336030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-129075
I0920 17:53:44.286532  336030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/functional-129075/id_rsa Username:docker}
I0920 17:53:44.387785  336030 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-129075 ssh pgrep buildkitd: exit status 1 (328.828751ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image build -t localhost/my-image:functional-129075 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 image build -t localhost/my-image:functional-129075 testdata/build --alsologtostderr: (3.424368937s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-129075 image build -t localhost/my-image:functional-129075 testdata/build --alsologtostderr:
I0920 17:53:44.874228  336186 out.go:345] Setting OutFile to fd 1 ...
I0920 17:53:44.875095  336186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.875115  336186 out.go:358] Setting ErrFile to fd 2...
I0920 17:53:44.875122  336186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:53:44.875435  336186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
I0920 17:53:44.876247  336186 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.877418  336186 config.go:182] Loaded profile config "functional-129075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 17:53:44.878014  336186 cli_runner.go:164] Run: docker container inspect functional-129075 --format={{.State.Status}}
I0920 17:53:44.899745  336186 ssh_runner.go:195] Run: systemctl --version
I0920 17:53:44.899798  336186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-129075
I0920 17:53:44.924480  336186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/functional-129075/id_rsa Username:docker}
I0920 17:53:45.057139  336186 build_images.go:161] Building image from path: /tmp/build.4166024728.tar
I0920 17:53:45.057225  336186 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 17:53:45.131753  336186 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4166024728.tar
I0920 17:53:45.164121  336186 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4166024728.tar: stat -c "%s %y" /var/lib/minikube/build/build.4166024728.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4166024728.tar': No such file or directory
I0920 17:53:45.164228  336186 ssh_runner.go:362] scp /tmp/build.4166024728.tar --> /var/lib/minikube/build/build.4166024728.tar (3072 bytes)
I0920 17:53:45.237472  336186 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4166024728
I0920 17:53:45.250738  336186 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4166024728 -xf /var/lib/minikube/build/build.4166024728.tar
I0920 17:53:45.267513  336186 containerd.go:394] Building image: /var/lib/minikube/build/build.4166024728
I0920 17:53:45.267670  336186 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4166024728 --local dockerfile=/var/lib/minikube/build/build.4166024728 --output type=image,name=localhost/my-image:functional-129075
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:5a784e064d0a7f1e5faac45015526d22203017e26ccae6d97049cd4713465330
#8 exporting manifest sha256:5a784e064d0a7f1e5faac45015526d22203017e26ccae6d97049cd4713465330 0.0s done
#8 exporting config sha256:468cfc287de40efa64db6f23ed53da810455f2a70595cfcbedaff0cec108b74b 0.0s done
#8 naming to localhost/my-image:functional-129075 done
#8 DONE 0.1s
I0920 17:53:48.179308  336186 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4166024728 --local dockerfile=/var/lib/minikube/build/build.4166024728 --output type=image,name=localhost/my-image:functional-129075: (2.911611572s)
I0920 17:53:48.179383  336186 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4166024728
I0920 17:53:48.188871  336186 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4166024728.tar
I0920 17:53:48.198944  336186 build_images.go:217] Built localhost/my-image:functional-129075 from /tmp/build.4166024728.tar
I0920 17:53:48.198974  336186 build_images.go:133] succeeded building to: functional-129075
I0920 17:53:48.198980  336186 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
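
Because this profile uses the containerd runtime there is no Docker daemon to build against; the test first checks whether buildkitd is running in the node (the pgrep call), then minikube copies the build context tarball into the node and drives BuildKit directly, as the stderr trace shows. Roughly, with <build-dir> standing in for the generated /var/lib/minikube/build/build.* directory:

	# Build an image inside the cluster node from a local context directory
	out/minikube-linux-arm64 -p functional-129075 image build -t localhost/my-image:functional-129075 testdata/build
	# Equivalent in-node step (simplified from the log above)
	sudo buildctl build --frontend dockerfile.v0 \
	  --local context=<build-dir> --local dockerfile=<build-dir> \
	  --output type=image,name=localhost/my-image:functional-129075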

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-129075
2024/09/20 17:53:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075 --alsologtostderr: (1.096062023s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)
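
Setup and ImageLoadDaemon together cover the host-to-cluster image path: pull and tag on the host's Docker daemon, copy the image into the node's containerd store, and confirm it with image ls (the grep is an illustrative addition):

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-129075
	# Copy the image from the host daemon into the cluster's container runtime
	out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075
	out/minikube-linux-arm64 -p functional-129075 image ls | grep echo-server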

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
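
All three UpdateContextCmd variants run the same command; it refreshes the profile's kubeconfig entry so kubectl points at the cluster's current API server address (the current-context check is an illustrative follow-up, not part of the test):

	out/minikube-linux-arm64 -p functional-129075 update-context --alsologtostderr -v=2
	kubectl config current-context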

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075 --alsologtostderr
E0920 17:53:39.405365  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075 --alsologtostderr: (1.105957368s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-129075
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image load --daemon kicbase/echo-server:functional-129075 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image save kicbase/echo-server:functional-129075 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image rm kicbase/echo-server:functional-129075 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-129075
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-129075 image save --daemon kicbase/echo-server:functional-129075 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-129075
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
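
ImageSaveToFile through ImageSaveDaemon exercise the round trip between the cluster's image store, a tarball on the host, and the host Docker daemon. Condensed from the commands above, with a relative tarball path standing in for the workspace path used by the harness:

	# Export from the cluster to a tarball on the host
	out/minikube-linux-arm64 -p functional-129075 image save kicbase/echo-server:functional-129075 ./echo-server-save.tar
	# Remove the image from the cluster, then restore it from the tarball
	out/minikube-linux-arm64 -p functional-129075 image rm kicbase/echo-server:functional-129075
	out/minikube-linux-arm64 -p functional-129075 image load ./echo-server-save.tar
	# Or push it straight into the host Docker daemon and verify there
	out/minikube-linux-arm64 -p functional-129075 image save --daemon kicbase/echo-server:functional-129075
	docker image inspect kicbase/echo-server:functional-129075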

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-129075
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-129075
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-129075
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (121.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-409001 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 17:54:10.129180  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:54:51.091348  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-409001 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m0.67368721s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.48s)
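
StartCluster brings up a highly available cluster: the --ha flag provisions three control-plane nodes, and the follow-up status call reports each of them. The invocation, as in the log:

	out/minikube-linux-arm64 start -p ha-409001 --wait=true --memory=2200 --ha -v=7 \
	  --alsologtostderr --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr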

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (36.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- rollout status deployment/busybox
E0920 17:56:13.018587  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-409001 -- rollout status deployment/busybox: (33.169162464s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-6vtn8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-mlmkk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-q5v6n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-6vtn8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-mlmkk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-q5v6n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-6vtn8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-mlmkk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-q5v6n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (36.04s)
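
DeployApp applies a three-replica busybox Deployment and then runs DNS lookups from every pod to confirm cluster DNS works across nodes. Reduced to its essentials (<pod> stands for each name returned by get pods):

	out/minikube-linux-arm64 kubectl -p ha-409001 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p ha-409001 -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p ha-409001 -- get pods -o jsonpath='{.items[*].metadata.name}'
	# For each busybox pod, resolve an external name and the in-cluster API service
	out/minikube-linux-arm64 kubectl -p ha-409001 -- exec <pod> -- nslookup kubernetes.io
	out/minikube-linux-arm64 kubectl -p ha-409001 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local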

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-6vtn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-6vtn8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-mlmkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-mlmkk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-q5v6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-409001 -- exec busybox-7dff88458-q5v6n -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (22.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-409001 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-409001 -v=7 --alsologtostderr: (21.236137343s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.18s)
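
AddWorkerNode attaches a fourth node to the existing cluster; node add without extra flags joins it as a worker, which the later status output (type: Worker for ha-409001-m04) confirms:

	out/minikube-linux-arm64 node add -p ha-409001 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr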

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-409001 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.00723603s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp testdata/cp-test.txt ha-409001:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1634689532/001/cp-test_ha-409001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001:/home/docker/cp-test.txt ha-409001-m02:/home/docker/cp-test_ha-409001_ha-409001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test_ha-409001_ha-409001-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001:/home/docker/cp-test.txt ha-409001-m03:/home/docker/cp-test_ha-409001_ha-409001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test_ha-409001_ha-409001-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001:/home/docker/cp-test.txt ha-409001-m04:/home/docker/cp-test_ha-409001_ha-409001-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test_ha-409001_ha-409001-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp testdata/cp-test.txt ha-409001-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1634689532/001/cp-test_ha-409001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m02:/home/docker/cp-test.txt ha-409001:/home/docker/cp-test_ha-409001-m02_ha-409001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test_ha-409001-m02_ha-409001.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m02:/home/docker/cp-test.txt ha-409001-m03:/home/docker/cp-test_ha-409001-m02_ha-409001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test_ha-409001-m02_ha-409001-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m02:/home/docker/cp-test.txt ha-409001-m04:/home/docker/cp-test_ha-409001-m02_ha-409001-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test_ha-409001-m02_ha-409001-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp testdata/cp-test.txt ha-409001-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1634689532/001/cp-test_ha-409001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m03:/home/docker/cp-test.txt ha-409001:/home/docker/cp-test_ha-409001-m03_ha-409001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test_ha-409001-m03_ha-409001.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m03:/home/docker/cp-test.txt ha-409001-m02:/home/docker/cp-test_ha-409001-m03_ha-409001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test_ha-409001-m03_ha-409001-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m03:/home/docker/cp-test.txt ha-409001-m04:/home/docker/cp-test_ha-409001-m03_ha-409001-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test_ha-409001-m03_ha-409001-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp testdata/cp-test.txt ha-409001-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1634689532/001/cp-test_ha-409001-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m04:/home/docker/cp-test.txt ha-409001:/home/docker/cp-test_ha-409001-m04_ha-409001.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test_ha-409001-m04_ha-409001.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m04:/home/docker/cp-test.txt ha-409001-m02:/home/docker/cp-test_ha-409001-m04_ha-409001-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m02 "sudo cat /home/docker/cp-test_ha-409001-m04_ha-409001-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m04:/home/docker/cp-test.txt ha-409001-m03:/home/docker/cp-test_ha-409001-m04_ha-409001-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test_ha-409001-m04_ha-409001-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.87s)
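Every hop in the copy matrix above is the same round trip: push a file in, cat it back over ssh, copy it node-to-node, then cat it on the destination. Condensed for one pair, with the commands taken verbatim from the run above:

  out/minikube-linux-arm64 -p ha-409001 cp testdata/cp-test.txt ha-409001-m03:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001-m03 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-arm64 -p ha-409001 cp ha-409001-m03:/home/docker/cp-test.txt ha-409001:/home/docker/cp-test_ha-409001-m03_ha-409001.txt
  out/minikube-linux-arm64 -p ha-409001 ssh -n ha-409001 "sudo cat /home/docker/cp-test_ha-409001-m03_ha-409001.txt"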

TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-409001 node stop m02 -v=7 --alsologtostderr: (12.136232547s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr: exit status 7 (761.091451ms)
-- stdout --
	ha-409001
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409001-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409001-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409001-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0920 17:57:24.929398  352432 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:57:24.929605  352432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:24.929635  352432 out.go:358] Setting ErrFile to fd 2...
	I0920 17:57:24.929642  352432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:57:24.929973  352432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 17:57:24.930276  352432 out.go:352] Setting JSON to false
	I0920 17:57:24.930313  352432 mustload.go:65] Loading cluster: ha-409001
	I0920 17:57:24.930366  352432 notify.go:220] Checking for updates...
	I0920 17:57:24.930855  352432 config.go:182] Loaded profile config "ha-409001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 17:57:24.930874  352432 status.go:174] checking status of ha-409001 ...
	I0920 17:57:24.931700  352432 cli_runner.go:164] Run: docker container inspect ha-409001 --format={{.State.Status}}
	I0920 17:57:24.954160  352432 status.go:364] ha-409001 host status = "Running" (err=<nil>)
	I0920 17:57:24.954186  352432 host.go:66] Checking if "ha-409001" exists ...
	I0920 17:57:24.958736  352432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409001
	I0920 17:57:25.004328  352432 host.go:66] Checking if "ha-409001" exists ...
	I0920 17:57:25.004659  352432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:57:25.004706  352432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409001
	I0920 17:57:25.028884  352432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/ha-409001/id_rsa Username:docker}
	I0920 17:57:25.124895  352432 ssh_runner.go:195] Run: systemctl --version
	I0920 17:57:25.129626  352432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:57:25.142205  352432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:57:25.209119  352432 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 17:57:25.197807255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:57:25.209725  352432 kubeconfig.go:125] found "ha-409001" server: "https://192.168.49.254:8443"
	I0920 17:57:25.209761  352432 api_server.go:166] Checking apiserver status ...
	I0920 17:57:25.209811  352432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:57:25.222027  352432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1473/cgroup
	I0920 17:57:25.233161  352432 api_server.go:182] apiserver freezer: "7:freezer:/docker/ceda179f575b68ff3eef96c4fb1580b7d5351907111c5f440ba21f37a45f19d9/kubepods/burstable/pod6b4ca602b20a1db3d41dc2e2af16a1fd/1f3dc4c0e1ed21fb2ebf9f5943f00d950411a7622671fbd6daa072b7908af35a"
	I0920 17:57:25.233232  352432 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ceda179f575b68ff3eef96c4fb1580b7d5351907111c5f440ba21f37a45f19d9/kubepods/burstable/pod6b4ca602b20a1db3d41dc2e2af16a1fd/1f3dc4c0e1ed21fb2ebf9f5943f00d950411a7622671fbd6daa072b7908af35a/freezer.state
	I0920 17:57:25.243688  352432 api_server.go:204] freezer state: "THAWED"
	I0920 17:57:25.243720  352432 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:57:25.253345  352432 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:57:25.253372  352432 status.go:456] ha-409001 apiserver status = Running (err=<nil>)
	I0920 17:57:25.253383  352432 status.go:176] ha-409001 status: &{Name:ha-409001 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:57:25.253426  352432 status.go:174] checking status of ha-409001-m02 ...
	I0920 17:57:25.253740  352432 cli_runner.go:164] Run: docker container inspect ha-409001-m02 --format={{.State.Status}}
	I0920 17:57:25.270558  352432 status.go:364] ha-409001-m02 host status = "Stopped" (err=<nil>)
	I0920 17:57:25.270583  352432 status.go:377] host is not running, skipping remaining checks
	I0920 17:57:25.270590  352432 status.go:176] ha-409001-m02 status: &{Name:ha-409001-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:57:25.270615  352432 status.go:174] checking status of ha-409001-m03 ...
	I0920 17:57:25.270933  352432 cli_runner.go:164] Run: docker container inspect ha-409001-m03 --format={{.State.Status}}
	I0920 17:57:25.287934  352432 status.go:364] ha-409001-m03 host status = "Running" (err=<nil>)
	I0920 17:57:25.287958  352432 host.go:66] Checking if "ha-409001-m03" exists ...
	I0920 17:57:25.288259  352432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409001-m03
	I0920 17:57:25.306165  352432 host.go:66] Checking if "ha-409001-m03" exists ...
	I0920 17:57:25.306490  352432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:57:25.306530  352432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409001-m03
	I0920 17:57:25.324102  352432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/ha-409001-m03/id_rsa Username:docker}
	I0920 17:57:25.416555  352432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:57:25.428368  352432 kubeconfig.go:125] found "ha-409001" server: "https://192.168.49.254:8443"
	I0920 17:57:25.428403  352432 api_server.go:166] Checking apiserver status ...
	I0920 17:57:25.428453  352432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:57:25.439497  352432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup
	I0920 17:57:25.449416  352432 api_server.go:182] apiserver freezer: "7:freezer:/docker/fe4b7b4192e4d23535ac1eb2a461cccfde6f275a2b5b275c638fdbdf58e88022/kubepods/burstable/pod8f1edee9b6d93fea1058c2395637e6ba/f7fdf6515093e8fe7119ad50e8e84b77a10aa982d85270db45c0bc47765c211f"
	I0920 17:57:25.449492  352432 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe4b7b4192e4d23535ac1eb2a461cccfde6f275a2b5b275c638fdbdf58e88022/kubepods/burstable/pod8f1edee9b6d93fea1058c2395637e6ba/f7fdf6515093e8fe7119ad50e8e84b77a10aa982d85270db45c0bc47765c211f/freezer.state
	I0920 17:57:25.459605  352432 api_server.go:204] freezer state: "THAWED"
	I0920 17:57:25.459633  352432 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:57:25.467528  352432 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:57:25.467561  352432 status.go:456] ha-409001-m03 apiserver status = Running (err=<nil>)
	I0920 17:57:25.467570  352432 status.go:176] ha-409001-m03 status: &{Name:ha-409001-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:57:25.467586  352432 status.go:174] checking status of ha-409001-m04 ...
	I0920 17:57:25.467891  352432 cli_runner.go:164] Run: docker container inspect ha-409001-m04 --format={{.State.Status}}
	I0920 17:57:25.485374  352432 status.go:364] ha-409001-m04 host status = "Running" (err=<nil>)
	I0920 17:57:25.485405  352432 host.go:66] Checking if "ha-409001-m04" exists ...
	I0920 17:57:25.485760  352432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409001-m04
	I0920 17:57:25.503782  352432 host.go:66] Checking if "ha-409001-m04" exists ...
	I0920 17:57:25.504155  352432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:57:25.504206  352432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409001-m04
	I0920 17:57:25.523119  352432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/ha-409001-m04/id_rsa Username:docker}
	I0920 17:57:25.616613  352432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:57:25.629323  352432 status.go:176] ha-409001-m04 status: &{Name:ha-409001-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
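For reference, the per-node probes visible in the stderr above can be approximated by hand. A rough sketch using the values from this run (profile ha-409001, HA endpoint 192.168.49.254:8443); the curl call is an assumption and only works if the apiserver still allows anonymous access to /healthz:

  # container state of each node, which is the first thing status checks
  docker container inspect ha-409001 --format={{.State.Status}}
  docker container inspect ha-409001-m02 --format={{.State.Status}}
  # apiserver health behind the HA virtual IP
  curl -k https://192.168.49.254:8443/healthz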

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.08s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-409001 node start m02 -v=7 --alsologtostderr: (16.968314562s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr: (1.004260253s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.388099202s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (153.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-409001 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-409001 -v=7 --alsologtostderr
E0920 17:57:57.113909  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.120391  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.131776  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.153195  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.194639  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.276072  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.437576  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:57.759444  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:58.401321  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:59.683153  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:02.244583  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:07.366161  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:17.608047  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-409001 -v=7 --alsologtostderr: (37.239561455s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-409001 --wait=true -v=7 --alsologtostderr
E0920 17:58:29.144476  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:38.090171  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:56.860347  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:19.052375  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-409001 --wait=true -v=7 --alsologtostderr: (1m55.808800107s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-409001
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (153.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-409001 node delete m03 -v=7 --alsologtostderr: (9.602178465s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)
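The readiness query above uses a go-template; an equivalent jsonpath form, assuming kubectl is still pointed at the ha-409001 context, is:

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'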

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 stop -v=7 --alsologtostderr
E0920 18:00:40.974544  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-409001 stop -v=7 --alsologtostderr: (35.915232131s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr: exit status 7 (109.973525ms)
-- stdout --
	ha-409001
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409001-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409001-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 18:01:06.407186  366843 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:01:06.407355  366843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:01:06.407383  366843 out.go:358] Setting ErrFile to fd 2...
	I0920 18:01:06.407407  366843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:01:06.407681  366843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:01:06.407895  366843 out.go:352] Setting JSON to false
	I0920 18:01:06.407955  366843 mustload.go:65] Loading cluster: ha-409001
	I0920 18:01:06.408034  366843 notify.go:220] Checking for updates...
	I0920 18:01:06.408443  366843 config.go:182] Loaded profile config "ha-409001": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:01:06.408481  366843 status.go:174] checking status of ha-409001 ...
	I0920 18:01:06.409142  366843 cli_runner.go:164] Run: docker container inspect ha-409001 --format={{.State.Status}}
	I0920 18:01:06.425625  366843 status.go:364] ha-409001 host status = "Stopped" (err=<nil>)
	I0920 18:01:06.425644  366843 status.go:377] host is not running, skipping remaining checks
	I0920 18:01:06.425651  366843 status.go:176] ha-409001 status: &{Name:ha-409001 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:01:06.425689  366843 status.go:174] checking status of ha-409001-m02 ...
	I0920 18:01:06.425995  366843 cli_runner.go:164] Run: docker container inspect ha-409001-m02 --format={{.State.Status}}
	I0920 18:01:06.446488  366843 status.go:364] ha-409001-m02 host status = "Stopped" (err=<nil>)
	I0920 18:01:06.446507  366843 status.go:377] host is not running, skipping remaining checks
	I0920 18:01:06.446513  366843 status.go:176] ha-409001-m02 status: &{Name:ha-409001-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:01:06.446531  366843 status.go:174] checking status of ha-409001-m04 ...
	I0920 18:01:06.446825  366843 cli_runner.go:164] Run: docker container inspect ha-409001-m04 --format={{.State.Status}}
	I0920 18:01:06.468563  366843 status.go:364] ha-409001-m04 host status = "Stopped" (err=<nil>)
	I0920 18:01:06.468581  366843 status.go:377] host is not running, skipping remaining checks
	I0920 18:01:06.468588  366843 status.go:176] ha-409001-m04 status: &{Name:ha-409001-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.03s)

TestMultiControlPlane/serial/RestartCluster (79.16s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-409001 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-409001 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.238333966s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.16s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (46.44s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-409001 --control-plane -v=7 --alsologtostderr
E0920 18:02:57.112668  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-409001 --control-plane -v=7 --alsologtostderr: (45.44919632s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-409001 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.027475246s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestJSONOutput/start/Command (53.5s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-690059 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0920 18:03:24.815970  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:29.144913  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-690059 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.5021079s)
--- PASS: TestJSONOutput/start/Command (53.50s)
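The start above emits one CloudEvent per line (the same shape as the TestErrorJSONOutput stdout further down). A quick way to eyeball the step sequence that the Distinct/IncreasingCurrentSteps subtests assert on; jq is an assumption here, not part of the test:

  out/minikube-linux-arm64 start -p json-output-690059 --output=json --user=testUser \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.name'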

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-690059 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-690059 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.27s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-690059 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-690059 --output=json --user=testUser: (1.266045365s)
--- PASS: TestJSONOutput/stop/Command (1.27s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-145277 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-145277 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.177136ms)
-- stdout --
	{"specversion":"1.0","id":"7abb0a32-f2c9-4976-bb67-bd1eb001ab56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-145277] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4224de7-ab62-4323-a576-3558b6e20bae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"d5c98539-dcdd-49a7-a14f-f2dc06d9a63f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64977632-55b8-4939-9e8a-4cfde25cf9e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig"}}
	{"specversion":"1.0","id":"65fc1ef2-b8b1-49ae-beb2-fb7973ebde48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube"}}
	{"specversion":"1.0","id":"88de0647-c376-45af-bdb5-0aca9707d598","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5a5832e6-71c4-4766-b901-4bcc30f98aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8ebbb36c-8c7f-4608-abff-35de4e107d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-145277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-145277
--- PASS: TestErrorJSONOutput (0.22s)
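The error event in the stdout above is machine-readable; pulling just the error name and message out of the stream looks roughly like this (jq assumed, values as logged in this run):

  out/minikube-linux-arm64 start -p json-output-error-145277 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64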

TestKicCustomNetwork/create_custom_network (36.68s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-832037 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-832037 --network=: (34.546233509s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-832037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-832037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-832037: (2.106506432s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.68s)

TestKicCustomNetwork/use_default_bridge_network (34.18s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-323106 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-323106 --network=bridge: (32.131496861s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-323106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-323106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-323106: (2.021521297s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.18s)

TestKicExistingNetwork (33.68s)

=== RUN   TestKicExistingNetwork
I0920 18:05:33.169148  299684 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 18:05:33.184588  299684 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 18:05:33.184682  299684 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 18:05:33.184706  299684 cli_runner.go:164] Run: docker network inspect existing-network
W0920 18:05:33.200594  299684 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 18:05:33.200625  299684 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0920 18:05:33.200650  299684 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0920 18:05:33.200754  299684 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 18:05:33.218106  299684 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-758fc8c66451 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:12:0e:b8:19} reservation:<nil>}
I0920 18:05:33.218521  299684 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400138cf30}
I0920 18:05:33.218547  299684 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 18:05:33.218617  299684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 18:05:33.289730  299684 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-755862 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-755862 --network=existing-network: (31.579840045s)
helpers_test.go:175: Cleaning up "existing-network-755862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-755862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-755862: (1.952903251s)
I0920 18:06:06.839004  299684 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.68s)
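The run above pre-creates a bridge network, points --network= at it, then cleans up. A manual sketch of the same flow, using the subnet minikube picked here (the created_by/name labels minikube adds are omitted):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
  out/minikube-linux-arm64 start -p existing-network-755862 --network=existing-network
  out/minikube-linux-arm64 delete -p existing-network-755862
  docker network rm existing-network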

TestKicCustomSubnet (35.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-633806 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-633806 --subnet=192.168.60.0/24: (33.558668082s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-633806 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-633806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-633806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-633806: (2.093529423s)
--- PASS: TestKicCustomSubnet (35.67s)

TestKicStaticIP (37.4s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-943651 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-943651 --static-ip=192.168.200.200: (35.214758085s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-943651 ip
helpers_test.go:175: Cleaning up "static-ip-943651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-943651
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-943651: (2.034917041s)
--- PASS: TestKicStaticIP (37.40s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-169815 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-169815 --driver=docker  --container-runtime=containerd: (33.115636135s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-172564 --driver=docker  --container-runtime=containerd
E0920 18:07:57.112004  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-172564 --driver=docker  --container-runtime=containerd: (31.337489799s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-169815
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-172564
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-172564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-172564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-172564: (2.002708935s)
helpers_test.go:175: Cleaning up "first-169815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-169815
E0920 18:08:29.144948  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-169815: (1.996606718s)
--- PASS: TestMinikubeProfile (69.76s)

TestMountStart/serial/StartWithMountFirst (7.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-515933 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-515933 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.515563106s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.52s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-515933 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-518246 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-518246 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.822951863s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.82s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-518246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-515933 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-515933 --alsologtostderr -v=5: (1.696193596s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-518246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-518246
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-518246: (1.201109936s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-518246
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-518246: (6.438217302s)
--- PASS: TestMountStart/serial/RestartStopped (7.44s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-518246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (68.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541172 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 18:09:52.222342  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541172 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.949995985s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.45s)

TestMultiNode/serial/DeployApp2Nodes (15.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-541172 -- rollout status deployment/busybox: (13.263470654s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-5mq8d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-lm9j7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-5mq8d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-lm9j7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-5mq8d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-lm9j7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.05s)

TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-5mq8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-5mq8d -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-lm9j7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-541172 -- exec busybox-7dff88458-lm9j7 -- sh -c "ping -c 1 192.168.67.1"
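For readability, the two exec lines per pod above amount to the following check run inside each busybox pod (a rough sketch, not the test's literal source; the address is the one observed in this run):
    # resolve the host-side name published by minikube, then ping the address it returns
    IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)   # resolved to 192.168.67.1 here
    ping -c 1 "$IP"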
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (17.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-541172 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-541172 -v 3 --alsologtostderr: (16.522249151s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.18s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-541172 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (10.02s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp testdata/cp-test.txt multinode-541172:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3287424471/001/cp-test_multinode-541172.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172:/home/docker/cp-test.txt multinode-541172-m02:/home/docker/cp-test_multinode-541172_multinode-541172-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test_multinode-541172_multinode-541172-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172:/home/docker/cp-test.txt multinode-541172-m03:/home/docker/cp-test_multinode-541172_multinode-541172-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test_multinode-541172_multinode-541172-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp testdata/cp-test.txt multinode-541172-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3287424471/001/cp-test_multinode-541172-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m02:/home/docker/cp-test.txt multinode-541172:/home/docker/cp-test_multinode-541172-m02_multinode-541172.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test_multinode-541172-m02_multinode-541172.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m02:/home/docker/cp-test.txt multinode-541172-m03:/home/docker/cp-test_multinode-541172-m02_multinode-541172-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test_multinode-541172-m02_multinode-541172-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp testdata/cp-test.txt multinode-541172-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3287424471/001/cp-test_multinode-541172-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m03:/home/docker/cp-test.txt multinode-541172:/home/docker/cp-test_multinode-541172-m03_multinode-541172.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172 "sudo cat /home/docker/cp-test_multinode-541172-m03_multinode-541172.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 cp multinode-541172-m03:/home/docker/cp-test.txt multinode-541172-m02:/home/docker/cp-test_multinode-541172-m03_multinode-541172-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 ssh -n multinode-541172-m02 "sudo cat /home/docker/cp-test_multinode-541172-m03_multinode-541172-m02.txt"
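Condensed, the copy matrix above exercises two subcommand shapes over every node pair (angle-bracket names are placeholders for illustration, not literal test values):
    # copy a file between the host and a node, or between two nodes, of a multi-node profile
    out/minikube-linux-arm64 -p <profile> cp <source> <node>:<path>
    # read it back over SSH on the target node to confirm the contents arrived intact
    out/minikube-linux-arm64 -p <profile> ssh -n <node> "sudo cat <path>"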
--- PASS: TestMultiNode/serial/CopyFile (10.02s)

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-541172 node stop m03: (1.205040263s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541172 status: exit status 7 (517.134824ms)

                                                
                                                
-- stdout --
	multinode-541172
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-541172-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-541172-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr: exit status 7 (500.910404ms)

                                                
                                                
-- stdout --
	multinode-541172
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-541172-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-541172-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:10:50.500075  420121 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:10:50.500294  420121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:50.500304  420121 out.go:358] Setting ErrFile to fd 2...
	I0920 18:10:50.500310  420121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:10:50.500606  420121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:10:50.500854  420121 out.go:352] Setting JSON to false
	I0920 18:10:50.500905  420121 mustload.go:65] Loading cluster: multinode-541172
	I0920 18:10:50.501000  420121 notify.go:220] Checking for updates...
	I0920 18:10:50.501491  420121 config.go:182] Loaded profile config "multinode-541172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:10:50.501552  420121 status.go:174] checking status of multinode-541172 ...
	I0920 18:10:50.502314  420121 cli_runner.go:164] Run: docker container inspect multinode-541172 --format={{.State.Status}}
	I0920 18:10:50.525752  420121 status.go:364] multinode-541172 host status = "Running" (err=<nil>)
	I0920 18:10:50.525782  420121 host.go:66] Checking if "multinode-541172" exists ...
	I0920 18:10:50.526094  420121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-541172
	I0920 18:10:50.554336  420121 host.go:66] Checking if "multinode-541172" exists ...
	I0920 18:10:50.554705  420121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:10:50.554773  420121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-541172
	I0920 18:10:50.578126  420121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/multinode-541172/id_rsa Username:docker}
	I0920 18:10:50.672550  420121 ssh_runner.go:195] Run: systemctl --version
	I0920 18:10:50.676961  420121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:10:50.688560  420121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:10:50.740414  420121 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 18:10:50.730307327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:10:50.741003  420121 kubeconfig.go:125] found "multinode-541172" server: "https://192.168.67.2:8443"
	I0920 18:10:50.741048  420121 api_server.go:166] Checking apiserver status ...
	I0920 18:10:50.741098  420121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:10:50.752217  420121 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1446/cgroup
	I0920 18:10:50.762759  420121 api_server.go:182] apiserver freezer: "7:freezer:/docker/a3109f6bdf0b0a9207b38b767b6ee4e97ff501e743c84746f77e162ec50ceadd/kubepods/burstable/podd2135ca6be6369fab8e31db8fd9c2642/837790082e48631340e8126cc001a4204ed3fa9907133bee74fd78420c13952c"
	I0920 18:10:50.762842  420121 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a3109f6bdf0b0a9207b38b767b6ee4e97ff501e743c84746f77e162ec50ceadd/kubepods/burstable/podd2135ca6be6369fab8e31db8fd9c2642/837790082e48631340e8126cc001a4204ed3fa9907133bee74fd78420c13952c/freezer.state
	I0920 18:10:50.771578  420121 api_server.go:204] freezer state: "THAWED"
	I0920 18:10:50.771605  420121 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 18:10:50.780359  420121 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 18:10:50.780386  420121 status.go:456] multinode-541172 apiserver status = Running (err=<nil>)
	I0920 18:10:50.780396  420121 status.go:176] multinode-541172 status: &{Name:multinode-541172 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:10:50.780413  420121 status.go:174] checking status of multinode-541172-m02 ...
	I0920 18:10:50.780721  420121 cli_runner.go:164] Run: docker container inspect multinode-541172-m02 --format={{.State.Status}}
	I0920 18:10:50.796866  420121 status.go:364] multinode-541172-m02 host status = "Running" (err=<nil>)
	I0920 18:10:50.796898  420121 host.go:66] Checking if "multinode-541172-m02" exists ...
	I0920 18:10:50.797207  420121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-541172-m02
	I0920 18:10:50.814095  420121 host.go:66] Checking if "multinode-541172-m02" exists ...
	I0920 18:10:50.814395  420121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:10:50.814441  420121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-541172-m02
	I0920 18:10:50.831186  420121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33284 SSHKeyPath:/home/jenkins/minikube-integration/19672-294290/.minikube/machines/multinode-541172-m02/id_rsa Username:docker}
	I0920 18:10:50.924038  420121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:10:50.935857  420121 status.go:176] multinode-541172-m02 status: &{Name:multinode-541172-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:10:50.935893  420121 status.go:174] checking status of multinode-541172-m03 ...
	I0920 18:10:50.936230  420121 cli_runner.go:164] Run: docker container inspect multinode-541172-m03 --format={{.State.Status}}
	I0920 18:10:50.952129  420121 status.go:364] multinode-541172-m03 host status = "Stopped" (err=<nil>)
	I0920 18:10:50.952150  420121 status.go:377] host is not running, skipping remaining checks
	I0920 18:10:50.952157  420121 status.go:176] multinode-541172-m03 status: &{Name:multinode-541172-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

TestMultiNode/serial/StartAfterStop (9.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-541172 node start m03 -v=7 --alsologtostderr: (8.636972249s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.64s)

TestMultiNode/serial/RestartKeepsNodes (93.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541172
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-541172
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-541172: (25.095555953s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541172 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541172 --wait=true -v=8 --alsologtostderr: (1m7.838387172s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541172
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.04s)

TestMultiNode/serial/DeleteNode (5.54s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-541172 node delete m03: (4.838670813s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 stop
E0920 18:12:57.113381  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-541172 stop: (23.844044047s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541172 status: exit status 7 (104.966147ms)

                                                
                                                
-- stdout --
	multinode-541172
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-541172-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr: exit status 7 (104.33566ms)

                                                
                                                
-- stdout --
	multinode-541172
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-541172-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:13:03.180687  428626 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:13:03.181058  428626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:03.181072  428626 out.go:358] Setting ErrFile to fd 2...
	I0920 18:13:03.181081  428626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:13:03.181338  428626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:13:03.181526  428626 out.go:352] Setting JSON to false
	I0920 18:13:03.181560  428626 mustload.go:65] Loading cluster: multinode-541172
	I0920 18:13:03.182013  428626 config.go:182] Loaded profile config "multinode-541172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:13:03.182036  428626 status.go:174] checking status of multinode-541172 ...
	I0920 18:13:03.182609  428626 cli_runner.go:164] Run: docker container inspect multinode-541172 --format={{.State.Status}}
	I0920 18:13:03.183104  428626 notify.go:220] Checking for updates...
	I0920 18:13:03.203098  428626 status.go:364] multinode-541172 host status = "Stopped" (err=<nil>)
	I0920 18:13:03.203121  428626 status.go:377] host is not running, skipping remaining checks
	I0920 18:13:03.203129  428626 status.go:176] multinode-541172 status: &{Name:multinode-541172 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:13:03.203155  428626 status.go:174] checking status of multinode-541172-m02 ...
	I0920 18:13:03.203471  428626 cli_runner.go:164] Run: docker container inspect multinode-541172-m02 --format={{.State.Status}}
	I0920 18:13:03.231208  428626 status.go:364] multinode-541172-m02 host status = "Stopped" (err=<nil>)
	I0920 18:13:03.231231  428626 status.go:377] host is not running, skipping remaining checks
	I0920 18:13:03.231239  428626 status.go:176] multinode-541172-m02 status: &{Name:multinode-541172-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (50.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541172 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 18:13:29.144571  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541172 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.998728578s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-541172 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.66s)

TestMultiNode/serial/ValidateNameConflict (36.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-541172
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541172-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-541172-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.580524ms)

                                                
                                                
-- stdout --
	* [multinode-541172-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-541172-m02' is duplicated with machine name 'multinode-541172-m02' in profile 'multinode-541172'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-541172-m03 --driver=docker  --container-runtime=containerd
E0920 18:14:20.177277  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-541172-m03 --driver=docker  --container-runtime=containerd: (34.33711137s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-541172
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-541172: exit status 80 (349.160447ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-541172 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-541172-m03 already exists in multinode-541172-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-541172-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-541172-m03: (2.045092444s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.88s)

TestPreload (126.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-892675 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-892675 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m29.801916962s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-892675 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-892675 image pull gcr.io/k8s-minikube/busybox: (1.886404321s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-892675
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-892675: (12.095371125s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-892675 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-892675 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.375858904s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-892675 image list
helpers_test.go:175: Cleaning up "test-preload-892675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-892675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-892675: (2.425147031s)
--- PASS: TestPreload (126.95s)

TestScheduledStopUnix (109s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-844574 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-844574 --memory=2048 --driver=docker  --container-runtime=containerd: (32.981236342s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-844574 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-844574 -n scheduled-stop-844574
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-844574 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:17:15.164889  299684 retry.go:31] will retry after 141.576µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.165451  299684 retry.go:31] will retry after 79.156µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.166540  299684 retry.go:31] will retry after 312.735µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.167691  299684 retry.go:31] will retry after 313.337µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.168827  299684 retry.go:31] will retry after 382.22µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.169984  299684 retry.go:31] will retry after 408.985µs: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.171148  299684 retry.go:31] will retry after 1.400419ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.173400  299684 retry.go:31] will retry after 1.50028ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.175638  299684 retry.go:31] will retry after 1.708113ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.177868  299684 retry.go:31] will retry after 2.249674ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.181118  299684 retry.go:31] will retry after 3.435995ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.185624  299684 retry.go:31] will retry after 8.730923ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.194897  299684 retry.go:31] will retry after 11.830652ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.207218  299684 retry.go:31] will retry after 28.083133ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.236398  299684 retry.go:31] will retry after 26.319496ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
I0920 18:17:15.263627  299684 retry.go:31] will retry after 44.689816ms: open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/scheduled-stop-844574/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-844574 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-844574 -n scheduled-stop-844574
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-844574
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-844574 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 18:17:57.112068  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-844574
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-844574: exit status 7 (64.090934ms)

                                                
                                                
-- stdout --
	scheduled-stop-844574
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-844574 -n scheduled-stop-844574
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-844574 -n scheduled-stop-844574: exit status 7 (67.023346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
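Condensed, the flow above schedules a stop, cancels it, re-arms it with a short delay, and then confirms the host actually stopped (flags as they appear in this run; comments are editorial):
    out/minikube-linux-arm64 stop -p scheduled-stop-844574 --schedule 5m       # arm a stop 5 minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-844574 --cancel-scheduled  # cancel the pending stop
    out/minikube-linux-arm64 stop -p scheduled-stop-844574 --schedule 15s      # re-arm with a 15s delay
    out/minikube-linux-arm64 status -p scheduled-stop-844574                   # exit status 7 once the host has stopped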
helpers_test.go:175: Cleaning up "scheduled-stop-844574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-844574
E0920 18:18:29.144787  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-844574: (4.45949051s)
--- PASS: TestScheduledStopUnix (109.00s)

TestInsufficientStorage (10.04s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-395421 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-395421 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.597863058s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"38db3326-0fe8-4ff6-9888-68d21de34004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-395421] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b9b8971-5710-4bff-802d-7b624229a86d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"1d2992eb-3374-476e-ac6e-5d70df5f3c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bc10b39-411c-4475-96ac-bba9978ed1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig"}}
	{"specversion":"1.0","id":"b47a1d9d-dc82-4b6b-86d4-5aef5949b12a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube"}}
	{"specversion":"1.0","id":"3d0fe987-8b2d-46ea-a62e-1d3b1f1af995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1f6dbc4d-0a6f-4fb3-9347-26858efc520f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0fcc1a12-3e18-4b14-96df-f24a87de5d4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d10258d9-8b8d-48e5-8982-fc755787b1b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e43d2b04-66c5-4f8b-adfb-648313455f31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c2eb945-e2df-41b9-b985-d77313d85194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aa13b1b1-0007-4a06-b4c7-85ba7d87aa59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-395421\" primary control-plane node in \"insufficient-storage-395421\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7403d0df-d5e8-4805-8c17-38d5deb58eae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"18b522d8-2fa0-4b9c-99af-5914cf7b0b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f352f47-0aea-47bb-b823-5d5b5880ee08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395421 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395421 --output=json --layout=cluster: exit status 7 (269.30192ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-395421","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395421","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:18:38.497213  447433 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-395421" does not appear in /home/jenkins/minikube-integration/19672-294290/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395421 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395421 --output=json --layout=cluster: exit status 7 (272.835356ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-395421","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395421","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:18:38.772072  447495 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-395421" does not appear in /home/jenkins/minikube-integration/19672-294290/kubeconfig
	E0920 18:18:38.782237  447495 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/insufficient-storage-395421/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-395421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-395421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-395421: (1.893944531s)
--- PASS: TestInsufficientStorage (10.04s)

TestRunningBinaryUpgrade (84.06s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0920 18:23:29.144717  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.119255580 start -p running-upgrade-330158 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.119255580 start -p running-upgrade-330158 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.785233143s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-330158 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-330158 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.969462482s)
helpers_test.go:175: Cleaning up "running-upgrade-330158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-330158
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-330158: (3.006806849s)
--- PASS: TestRunningBinaryUpgrade (84.06s)

TestKubernetesUpgrade (351.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.295968717s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-851119
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-851119: (1.222069301s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-851119 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-851119 status --format={{.Host}}: exit status 7 (74.458053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.714220096s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-851119 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (92.962775ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-851119] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-851119
	    minikube start -p kubernetes-upgrade-851119 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8511192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-851119 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-851119 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.87146256s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-851119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-851119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-851119: (2.5650129s)
--- PASS: TestKubernetesUpgrade (351.96s)
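Note: the sequence above upgrades a v1.20.0 cluster to v1.31.1 in place and then confirms that an in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). A minimal sketch of the supported path, assuming minikube is on PATH; k8s-upgrade-demo is a hypothetical profile name and the commands follow the log and the suggestion printed above:

  # Upgrade in place: restart the profile with a newer --kubernetes-version
  minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  minikube stop -p k8s-upgrade-demo
  minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd

  # Downgrading in place is rejected; recreate the profile instead, as suggested above
  minikube delete -p k8s-upgrade-demo
  minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0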

                                                
                                    
x
+
TestMissingContainerUpgrade (179.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2990154238 start -p missing-upgrade-939436 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2990154238 start -p missing-upgrade-939436 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.658145285s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-939436
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-939436
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-939436 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-939436 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.933454796s)
helpers_test.go:175: Cleaning up "missing-upgrade-939436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-939436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-939436: (2.291248566s)
--- PASS: TestMissingContainerUpgrade (179.73s)
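Note: this test removes the profile's Docker container behind minikube's back and then shows that a plain start recreates it. A minimal sketch, assuming the docker driver (where the container is named after the profile) and a hypothetical profile missing-demo:

  # Simulate the damage: delete the profile's container directly with docker
  docker stop missing-demo
  docker rm missing-demo

  # A subsequent start of the same profile rebuilds the container and recovers the cluster
  minikube start -p missing-demo --memory=2200 --driver=docker --container-runtime=containerd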

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (83.902416ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-124560] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
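Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start above fails fast with MK_USAGE (exit status 14). A minimal sketch of the fix, assuming minikube is on PATH and nok8s-demo is a hypothetical profile name:

  # Fails: the two flags conflict
  minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd

  # If kubernetes-version was set globally, clear it as the error message suggests
  minikube config unset kubernetes-version

  # Then start without specifying a version
  minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd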

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-124560 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-124560 --driver=docker  --container-runtime=containerd: (40.23326415s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-124560 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.088687597s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-124560 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-124560 status -o json: exit status 2 (311.869339ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-124560","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-124560
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-124560: (1.869600754s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.27s)
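Note: restarting an existing profile with --no-kubernetes keeps the node container running but stops the kubelet and API server, so status exits non-zero (2 here) while still printing usable JSON. A minimal sketch of checking that programmatically, assuming jq is installed; the field names come from the JSON shown above and nok8s-demo is a hypothetical profile name:

  minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd

  # status exits 2 when components are stopped, so do not let the non-zero exit abort the script
  status_json=$(minikube -p nok8s-demo status -o json || true)
  echo "$status_json" | jq -r '.Host, .Kubelet, .APIServer'   # expected: Running / Stopped / Stopped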

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-124560 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.140040106s)
--- PASS: TestNoKubernetes/serial/Start (6.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-124560 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-124560 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.381126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
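Note: the verification above runs systemctl through minikube ssh; when the kubelet unit is not running, systemctl is-active exits non-zero (typically 3 for an inactive unit), which minikube ssh surfaces as a failed command, hence the exit status 1 seen here. A minimal sketch of the same check with an explicit branch, using the hypothetical nok8s-demo profile:

  if minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; then
    echo "kubelet is active"
  else
    echo "kubelet is not running (expected when Kubernetes is disabled)"
  fi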

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-124560
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-124560: (1.217375701s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-124560 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-124560 --driver=docker  --container-runtime=containerd: (6.533598292s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-124560 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-124560 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.023928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (104.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.455336485 start -p stopped-upgrade-431619 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.455336485 start -p stopped-upgrade-431619 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.141122622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.455336485 -p stopped-upgrade-431619 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.455336485 -p stopped-upgrade-431619 stop: (20.026630703s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-431619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0920 18:22:57.115462  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-431619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.424091526s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.59s)
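Note: this variant upgrades through a stop rather than over a running cluster: the old binary creates and stops the profile, and the new binary brings it back up at the newer version. A minimal sketch, assuming the older release is saved as ./minikube-old and stopped-demo is a hypothetical profile name:

  # Create and stop a cluster with the older binary
  ./minikube-old start -p stopped-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
  ./minikube-old -p stopped-demo stop

  # Starting the stopped profile with the newer binary upgrades it on boot
  minikube start -p stopped-demo --memory=2200 --driver=docker --container-runtime=containerd

  # Logs from the upgraded cluster (the MinikubeLogs check that follows)
  minikube logs -p stopped-demo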

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-431619
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestPause/serial/Start (53.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-327067 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-327067 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.538632921s)
--- PASS: TestPause/serial/Start (53.54s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-327067 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-327067 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.41526917s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.44s)

                                                
                                    
x
+
TestPause/serial/Pause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-327067 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-327067 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-327067 --output=json --layout=cluster: exit status 2 (351.369396ms)

                                                
                                                
-- stdout --
	{"Name":"pause-327067","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-327067","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-327067 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.56s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-327067 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-327067 --alsologtostderr -v=5: (1.55672461s)
--- PASS: TestPause/serial/PauseAgain (1.56s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.44s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-327067 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-327067 --alsologtostderr -v=5: (3.442504735s)
--- PASS: TestPause/serial/DeletePaused (3.44s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-327067
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-327067: exit status 1 (19.580749ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-327067: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)
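Note: the pause group walks the full lifecycle: pause, inspect the paused status (StatusCode 418 / "Paused" in the cluster layout), unpause, pause again, delete, and finally confirm that the Docker-level resources are gone. A minimal sketch, assuming a hypothetical pause-demo profile that is already running:

  minikube pause -p pause-demo
  minikube status -p pause-demo --output=json --layout=cluster   # exits 2; StatusName shows Paused
  minikube unpause -p pause-demo
  minikube pause -p pause-demo
  minikube delete -p pause-demo

  # After delete, the backing volume and container should no longer exist
  docker volume inspect pause-demo   # "no such volume" is the expected outcome
  docker ps -a | grep pause-demo || echo "container removed"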

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-775498 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-775498 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (233.18015ms)

                                                
                                                
-- stdout --
	* [false-775498] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:26:06.480899  488292 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:26:06.481120  488292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:06.481149  488292 out.go:358] Setting ErrFile to fd 2...
	I0920 18:26:06.481169  488292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:06.481460  488292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-294290/.minikube/bin
	I0920 18:26:06.481938  488292 out.go:352] Setting JSON to false
	I0920 18:26:06.482973  488292 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7717,"bootTime":1726849050,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:26:06.483100  488292 start.go:139] virtualization:  
	I0920 18:26:06.488148  488292 out.go:177] * [false-775498] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:26:06.491412  488292 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 18:26:06.491493  488292 notify.go:220] Checking for updates...
	I0920 18:26:06.495528  488292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:26:06.498250  488292 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-294290/kubeconfig
	I0920 18:26:06.500572  488292 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-294290/.minikube
	I0920 18:26:06.503435  488292 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:26:06.507150  488292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:26:06.511008  488292 config.go:182] Loaded profile config "force-systemd-flag-691908": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 18:26:06.511231  488292 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:26:06.550486  488292 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:26:06.550603  488292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:26:06.629177  488292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:26:06.614810533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 18:26:06.629292  488292 docker.go:318] overlay module found
	I0920 18:26:06.631982  488292 out.go:177] * Using the docker driver based on user configuration
	I0920 18:26:06.634366  488292 start.go:297] selected driver: docker
	I0920 18:26:06.634384  488292 start.go:901] validating driver "docker" against <nil>
	I0920 18:26:06.634398  488292 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:26:06.637676  488292 out.go:201] 
	W0920 18:26:06.640475  488292 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0920 18:26:06.642768  488292 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-775498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-775498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-775498"

                                                
                                                
----------------------- debugLogs end: false-775498 [took: 4.169449278s] --------------------------------
helpers_test.go:175: Cleaning up "false-775498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-775498
--- PASS: TestNetworkPlugins/group/false (4.62s)
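Note: with the containerd runtime, --cni=false is rejected up front with MK_USAGE (exit status 14) because containerd needs a CNI plugin; the remaining output is only the debug-log sweep for the never-created profile. A minimal sketch, assuming cni-demo is a hypothetical profile name and that bridge is an accepted --cni value (any supported CNI, or omitting the flag, would also work):

  # Rejected: the containerd runtime requires CNI
  minikube start -p cni-demo --memory=2048 --cni=false --driver=docker --container-runtime=containerd

  # Works: pick an explicit CNI (or drop --cni and let minikube choose one)
  minikube start -p cni-demo --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd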

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (171.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0920 18:27:57.112267  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:28:29.144864  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-475170 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m51.367004008s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (171.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (62.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-949863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-949863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m2.081191488s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-475170 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [147748a2-5e37-4a37-94a9-de56b7ea2df1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [147748a2-5e37-4a37-94a9-de56b7ea2df1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004060527s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-475170 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.86s)
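Note: the deploy step applies testdata/busybox.yaml and polls until the integration-test=busybox pod is healthy before running a command in it. A rough equivalent with kubectl alone, assuming the same manifest and label; kubectl wait here stands in for the test helper's polling:

  kubectl --context old-k8s-version-475170 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-475170 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context old-k8s-version-475170 exec busybox -- /bin/sh -c "ulimit -n"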

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-475170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-475170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.472788903s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-475170 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.70s)
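Note: the addon is enabled with its image and registry overridden (pointing at fake.domain) so the test can later assert that the override landed in the deployment spec. A minimal sketch of enabling and spot-checking it; the grep at the end is an added convenience, not part of the test:

  minikube addons enable metrics-server -p old-k8s-version-475170 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  kubectl --context old-k8s-version-475170 describe deploy/metrics-server -n kube-system | grep -i image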

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-475170 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-475170 --alsologtostderr -v=3: (12.420268044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-475170 -n old-k8s-version-475170
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-475170 -n old-k8s-version-475170: exit status 7 (116.604698ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-475170 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
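Note: against a stopped profile, status exits 7, which the test explicitly treats as acceptable ("may be ok") before enabling the dashboard addon while the cluster is down. A minimal sketch of handling that exit code in a plain shell script (no set -e), reusing the profile name from the log:

  minikube status --format='{{.Host}}' -p old-k8s-version-475170 -n old-k8s-version-475170
  rc=$?
  # exit 7 means the host is stopped; addons can still be enabled in that state
  if [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ]; then
    minikube addons enable dashboard -p old-k8s-version-475170 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  fi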

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-949863 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2ed7987-b9b7-4d91-b9e7-23813cde878b] Pending
helpers_test.go:344: "busybox" [a2ed7987-b9b7-4d91-b9e7-23813cde878b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2ed7987-b9b7-4d91-b9e7-23813cde878b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003611364s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-949863 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-949863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-949863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012196594s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-949863 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-949863 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-949863 --alsologtostderr -v=3: (12.108947191s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949863 -n no-preload-949863
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949863 -n no-preload-949863: exit status 7 (73.280915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-949863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (290.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-949863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 18:32:57.112053  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:33:29.144988  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-949863 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m50.288835956s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-949863 -n no-preload-949863
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (290.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5kpgm" [0f8371ff-e772-498e-bd8b-f7f4b7d1478f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003444487s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5kpgm" [0f8371ff-e772-498e-bd8b-f7f4b7d1478f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00376074s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-949863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-949863 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-949863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949863 -n no-preload-949863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949863 -n no-preload-949863: exit status 2 (309.133342ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-949863 -n no-preload-949863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-949863 -n no-preload-949863: exit status 2 (353.854608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-949863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-949863 -n no-preload-949863
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-949863 -n no-preload-949863
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-320115 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-320115 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (59.901389886s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7hgck" [7db74b49-8f02-43a2-857c-b7cb0e61785b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004570266s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7hgck" [7db74b49-8f02-43a2-857c-b7cb0e61785b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004147341s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-475170 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-475170 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-475170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-475170 --alsologtostderr -v=1: (1.168923372s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-475170 -n old-k8s-version-475170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-475170 -n old-k8s-version-475170: exit status 2 (490.019167ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-475170 -n old-k8s-version-475170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-475170 -n old-k8s-version-475170: exit status 2 (437.054246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-475170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-475170 --alsologtostderr -v=1: (1.316187567s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-475170 -n old-k8s-version-475170
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-475170 -n old-k8s-version-475170
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-849976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-849976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m31.477161708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-320115 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6dbc0f52-7a7e-49bd-8fb7-224b36b29618] Pending
helpers_test.go:344: "busybox" [6dbc0f52-7a7e-49bd-8fb7-224b36b29618] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 18:37:57.113053  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6dbc0f52-7a7e-49bd-8fb7-224b36b29618] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004401906s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-320115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-320115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-320115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.446763064s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-320115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-320115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-320115 --alsologtostderr -v=3: (12.333479245s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320115 -n embed-certs-320115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320115 -n embed-certs-320115: exit status 7 (70.723218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-320115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-320115 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 18:38:29.144577  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-320115 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.065169698s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320115 -n embed-certs-320115
E0920 18:42:42.928569  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-849976 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [833de270-1ef8-400b-aeea-b4dfbc71df48] Pending
helpers_test.go:344: "busybox" [833de270-1ef8-400b-aeea-b4dfbc71df48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [833de270-1ef8-400b-aeea-b4dfbc71df48] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006348208s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-849976 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-849976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-849976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077249797s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-849976 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-849976 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-849976 --alsologtostderr -v=3: (12.150263839s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976: exit status 7 (75.659041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-849976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-849976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 18:40:28.401147  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.407607  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.419108  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.440500  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.481908  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.563341  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:28.724919  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:29.046585  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:29.688533  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:30.970429  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:33.532687  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:38.654775  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:48.897059  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:09.379067  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:20.988287  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:20.994722  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.006966  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.028558  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.070269  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.151831  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.313318  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:21.634930  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:22.276991  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:23.559591  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:26.121388  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:31.243139  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:41.485432  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:41:50.340890  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:42:01.967172  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-849976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m29.383292839s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-54dw2" [e46956c5-8140-439d-8a68-646979cc3b2c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003449831s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-54dw2" [e46956c5-8140-439d-8a68-646979cc3b2c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00562684s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-320115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-320115 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-320115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320115 -n embed-certs-320115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320115 -n embed-certs-320115: exit status 2 (328.705277ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320115 -n embed-certs-320115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320115 -n embed-certs-320115: exit status 2 (341.865214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-320115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320115 -n embed-certs-320115
E0920 18:42:57.112878  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/functional-129075/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320115 -n embed-certs-320115
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-046497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 18:43:12.225478  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:12.263132  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:29.144857  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-046497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (37.369882283s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-046497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-046497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080450955s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-046497 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-046497 --alsologtostderr -v=3: (1.261074468s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-046497 -n newest-cni-046497
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-046497 -n newest-cni-046497: exit status 7 (101.633666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-046497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-046497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-046497 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (16.985979996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-046497 -n newest-cni-046497
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gw662" [3c533f08-183b-45bf-a01d-dbb343fa9a8e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003651149s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gw662" [3c533f08-183b-45bf-a01d-dbb343fa9a8e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005069569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-849976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-046497 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-046497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-046497 -n newest-cni-046497
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-046497 -n newest-cni-046497: exit status 2 (337.91016ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-046497 -n newest-cni-046497
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-046497 -n newest-cni-046497: exit status 2 (312.987158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-046497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-046497 --alsologtostderr -v=1: (1.080667287s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-046497 -n newest-cni-046497
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-046497 -n newest-cni-046497
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-849976 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-849976 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-849976 --alsologtostderr -v=1: (1.118081688s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976: exit status 2 (413.347606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976: exit status 2 (444.224594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-849976 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-849976 -n default-k8s-diff-port-849976
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.25s)
E0920 18:49:38.889236  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/auto/Start (55.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0920 18:44:04.851127  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (55.21656204s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m31.135427068s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.14s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-775498 "pgrep -a kubelet"
I0920 18:45:00.221736  299684 config.go:182] Loaded profile config "auto-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-775498 replace --force -f testdata/netcat-deployment.yaml
I0920 18:45:00.828065  299684 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmnmn" [9ca14f8c-f9d0-4504-a9cb-a65560da2a5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmnmn" [9ca14f8c-f9d0-4504-a9cb-a65560da2a5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005970655s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.61s)
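Note: NetCatPod applies the netcat Deployment from testdata/netcat-deployment.yaml and then waits for its pod to become Ready. Outside the harness the readiness wait can be approximated with kubectl wait (my assumption; the test itself polls the pod list), assuming a minikube source checkout so the testdata path resolves:

  kubectl --context cni-demo replace --force -f testdata/netcat-deployment.yaml
  kubectl --context cni-demo wait --for=condition=Ready pod -l app=netcat --timeout=15m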

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
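Note: the DNS, Localhost and HairPin sub-tests run the same three probes from inside the netcat pod for every CNI: cluster DNS resolution, a TCP connect to localhost, and a hairpin connect back through the pod's own "netcat" Service. The commands are exactly those shown in the steps above and can be reused against any profile in this report (cni-demo is the hypothetical profile name from the earlier notes):

  kubectl --context cni-demo exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context cni-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context cni-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"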

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.60750726s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kbmt6" [202cf192-04d6-4410-95b2-e9fb4b336593] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004855668s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
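Note: ControllerPod sub-tests wait for the CNI's own pod to report healthy before any connectivity checks run. The label selectors and namespaces come straight from the corresponding steps (kindnet above, calico and flannel below); a quick manual check looks like:

  kubectl --context kindnet-775498 get pods -n kube-system  -l app=kindnet
  kubectl --context calico-775498  get pods -n kube-system  -l k8s-app=calico-node
  kubectl --context flannel-775498 get pods -n kube-flannel -l app=flannel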

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-775498 "pgrep -a kubelet"
I0920 18:45:47.635238  299684 config.go:182] Loaded profile config "kindnet-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-775498 replace --force -f testdata/netcat-deployment.yaml
I0920 18:45:48.028994  299684 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5mhvv" [11cff21e-fc24-456f-afc5-d303de5525eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5mhvv" [11cff21e-fc24-456f-afc5-d303de5525eb] Running
E0920 18:45:56.104781  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/old-k8s-version-475170/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005666309s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (57.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.216952377s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.22s)
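Note: custom-flannel differs from the built-in --cni=flannel selection by pointing --cni at a user-supplied manifest, which minikube applies instead of a bundled CNI. Sketch with the manifest path from the step above and a hypothetical profile name:

  minikube start -p custom-flannel-demo --memory=3072 --driver=docker --container-runtime=containerd \
    --cni=testdata/kube-flannel.yaml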

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8xn9k" [0192ec82-835f-4aab-b0f6-4a4d6cf293ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005830233s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-775498 "pgrep -a kubelet"
I0920 18:46:41.059287  299684 config.go:182] Loaded profile config "calico-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-775498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hgtxz" [bd842a70-26a2-4af7-8b4f-e2196d1e86e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hgtxz" [bd842a70-26a2-4af7-8b4f-e2196d1e86e7] Running
E0920 18:46:48.692841  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/no-preload-949863/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003984002s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.302087214s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.30s)
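Note: this group uses the legacy --enable-default-cni=true flag, which to my understanding is the older spelling of the bridge CNI selection exercised by the bridge group below (--cni=bridge). Sketch with a hypothetical profile name:

  minikube start -p default-cni-demo --memory=3072 --driver=docker --container-runtime=containerd \
    --enable-default-cni=true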

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-775498 "pgrep -a kubelet"
I0920 18:47:20.054096  299684 config.go:182] Loaded profile config "custom-flannel-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-775498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qdrtp" [77607b02-8d7b-4e44-8e1f-c4dc53c69a42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qdrtp" [77607b02-8d7b-4e44-8e1f-c4dc53c69a42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00446195s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (53.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0920 18:48:29.144954  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/addons-545041/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.903680232s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-775498 "pgrep -a kubelet"
I0920 18:48:34.992354  299684 config.go:182] Loaded profile config "enable-default-cni-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-775498 replace --force -f testdata/netcat-deployment.yaml
I0920 18:48:35.361123  299684 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4tmsh" [bc761a56-d261-42b9-bd62-e0b52c683f4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4tmsh" [bc761a56-d261-42b9-bd62-e0b52c683f4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003933509s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rqnvk" [5e1981ab-f3d3-406d-aa5d-a22c3122fc6c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006051588s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-775498 "pgrep -a kubelet"
I0920 18:48:57.433821  299684 config.go:182] Loaded profile config "flannel-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-775498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2x9lc" [8121a347-b961-4e3d-a639-d29a7e23b608] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 18:48:57.909407  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:57.915733  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:57.929355  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:57.950803  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:57.992366  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:58.074177  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:58.235429  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:58.557620  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:48:59.199901  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:49:00.481576  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2x9lc" [8121a347-b961-4e3d-a639-d29a7e23b608] Running
E0920 18:49:03.043206  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/default-k8s-diff-port-849976/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003862947s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (47.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-775498 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (47.778140879s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-775498 "pgrep -a kubelet"
I0920 18:49:55.041991  299684 config.go:182] Loaded profile config "bridge-775498": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-775498 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-znkx6" [d8719bc6-5e23-436f-beeb-3b4e16fec25a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-znkx6" [d8719bc6-5e23-436f-beeb-3b4e16fec25a] Running
E0920 18:50:00.808912  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:00.815366  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:00.826803  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:00.848231  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:00.889742  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:00.971293  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:01.133259  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:01.454687  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:02.096957  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:50:03.378578  299684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-294290/.minikube/profiles/auto-775498/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004006682s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-775498 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-775498 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-282367 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-282367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-282367
--- SKIP: TestDownloadOnlyKic (0.58s)
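Note: skipped tests that create a profile before deciding to skip, such as this one and disable-driver-mounts further below, still delete it afterwards. The same cleanup can be run by hand if a run is interrupted, using the profile name from the step above:

  minikube profile list
  minikube delete -p download-docker-282367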

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-636410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-636410
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-775498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-775498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-775498"

                                                
                                                
----------------------- debugLogs end: kubenet-775498 [took: 4.339270098s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-775498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-775498
--- SKIP: TestNetworkPlugins/group/kubenet (4.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-775498 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-775498" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-775498

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-775498" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-775498"

                                                
                                                
----------------------- debugLogs end: cilium-775498 [took: 4.96598089s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-775498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-775498
--- SKIP: TestNetworkPlugins/group/cilium (5.20s)